How neural technologies could transform the future of design
Early specifications, outlines, rough sketches and photos are all essential ways of expressing ideas.
Yet the core paradigm of CAD software, parametric CAD, has remained largely unchanged for 40 years. These engines, built on traditional software technology, rely on a keyboard, a mouse and a conventional user interface. Creators must explicitly define parameters, such as the dimensions of an object, which can restrict creativity.
Creators need technology that does not obstruct their creative flow. Instead, tools should enable rapid and iterative exploration of ideas, regardless of how detailed or approximate those ideas might be. Over the past decade, research has underscored that such early-stage exploration significantly improves project outcomes like cost, sustainability and suitability. That is why recent advancements in neural technology are generating such excitement.
The new era of design and make technology
Today, we are entering a transformative era in design and make technology. Picture a scenario where a computer comprehends spoken language, sketches, three-dimensional design data and industry-specific workflows. Now enrich this scenario with decades of a team's project knowledge. That is the promise of pioneering neural AI foundation models focussed on design and make problems.
Neural CAD engines enhance legacy parametric engines, offering novel ways to explore solutions and generate geometry. They still honour the traditional precision and control designers require, but most importantly, they introduce entirely new and richer ways to express and interact with ideas.
Unlike parametric CAD, they learn and improve, constantly aligning better with creators' needs and ways of working. Meanwhile, they can reliably reason about the three-dimensional, physical world.
Ultimately, these advancements aim to make creators faster, smarter and more competitive. The quicker they can turn around designs, the faster they can move products to market – but the impact goes beyond speed. AI-powered design tools can reason directly about the immense complexity of design creation. It is this that sets the stage for vast improvements across the entire design and make category.
New modes of human-machine interaction
Imagine you are a product designer exploring early-stage product concepts. AI image generators can already create conceptual images quickly; now imagine that, instead of just an image, you could generate a highly detailed CAD model just as easily. From a text prompt or a sketch, the neural CAD engine starts producing options instantly.
The deep neural networks behind this technology reason directly about the surfaces, edges and topology that would satisfy a request. And they do not produce just one model but several, allowing users to quickly explore the trade-offs between the options. The result is first-class, editable CAD geometry, so the design is immediately usable.
What this looks like in reality
Buildings are a great example of the transformation these systems can provide, given their interrelated levels of detail, representations and components. Changing something in one place invariably means creating or updating associated representations and structures. Much of this work is laborious – placing grid lines and columns, laying out floors, checking building codes – and it slows creators' ability to explore alternatives and implement changes rapidly.
Consider an architectural massing model that outlines the proposed shape for a new building. As the designer directly manipulates the shape, the neural CAD engine responds to these changes, auto-generating floor plan layouts. Generative AI creates a simple floor plan instantly, but for this to be truly useful, a designer needs to control the generation.
Leveraging the multi-modal interaction enabled by neural CAD, they can lock down certain elements, such as a specific stairwell, hallway or room, and then use natural language prompts to instruct the software to make changes to the structural material.
Combining this with the power of large language models, the software can recompute the locations and sizes of the columns and create an entirely new floor layout, all while honouring the constraints the designer specified.
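To make the workflow concrete, here is a toy sketch of the interaction pattern described above – locked elements survive regeneration while everything else is recomputed. The names and structure are purely illustrative assumptions, not any real neural CAD interface:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Element:
    """A hypothetical floor-plan element with a lock flag."""
    name: str
    material: str
    locked: bool = False

def regenerate(plan, new_material):
    """Recompute unlocked elements while honouring locked constraints.

    A real neural CAD engine would re-solve geometry; this toy version
    simply swaps the material on every element the designer has not locked.
    """
    return [e if e.locked else replace(e, material=new_material) for e in plan]

plan = [
    Element("stairwell", "concrete", locked=True),   # designer locks this
    Element("column_grid", "concrete"),              # free to change
    Element("hallway", "concrete", locked=True),     # designer locks this
]

# Prompt-driven change: "switch the structural material to steel"
updated = regenerate(plan, "steel")
```

In this sketch, the locked stairwell and hallway keep their original material after regeneration, while the unlocked column grid is recomputed with the new one.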
Empowering more intuitive, collaborative and accessible design
In the future, these neural CAD foundation models will be customisable, allowing them to be tuned to an organisation's proprietary data and processes. The integration of these models will transcend the capabilities of today's tools, transform the creative process and make it more intuitive, collaborative and abundant.
The convergence of these technologies is not just about enhancing productivity: it is about making design and creation processes more intuitive, collaborative and accessible for all.
This article first featured in ATJ issue 156