Over the course of the year we’ll be featuring a series of articles by Christopher Welch on emerging and interesting trends in digital architecture, complementing and expanding on topics explored in talks given around New Zealand.
Nature is full of examples of complex behaviour emerging out of the actions of simple organisms. Fish forming into bait-balls to confuse predators, and birds flocking together to lower wind shear, are both examples of this phenomenon. Agent-based simulation modelling techniques can be a useful way of developing emergent forms and structures. These kinds of simulations are very common in the movie industry, where they are used to simulate large crowds of people, battles, and other complex scenes. While academics and architects have experimented with agent simulation across a wide range of areas in the architectural field, in this post I want to highlight the potential of these techniques for aligning digital fabrication with physical realities, keeping buildability closely linked to designer intent.
For those new to the concept, I’ll briefly cover the theory behind agent-based simulation. Starting from first principles, agents are autonomous, self-contained units, each governed by a rule-set, whose collective actions produce higher-level complexity. In the simplest version of this algorithm, the behaviour of each agent (also known as a boid) in the swarm is governed by only three rules:
Cohesion: Steer towards the average position of nearby agents
Alignment: Match heading with nearby agents
Separation: Maintain a short distance from nearby agents
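The three rules can be sketched in a few dozen lines of code. The following is a minimal Python sketch, not the actual Boid plug-in for Grasshopper; the neighbourhood radius and rule weights are illustrative values, and each boid is just a `(x, y, vx, vy)` tuple.

```python
import math

def step(boids, radius=5.0, w_cohesion=0.01, w_align=0.05, w_sep=0.10, max_speed=2.0):
    """Advance the swarm by one step. Each boid is (x, y, vx, vy)."""
    new_boids = []
    for x, y, vx, vy in boids:
        # Only nearby agents influence this boid
        nbrs = [b for b in boids
                if (b[0], b[1]) != (x, y) and math.hypot(b[0] - x, b[1] - y) < radius]
        ax = ay = 0.0
        if nbrs:
            # Cohesion: steer towards the neighbours' average position
            cx = sum(b[0] for b in nbrs) / len(nbrs)
            cy = sum(b[1] for b in nbrs) / len(nbrs)
            ax += w_cohesion * (cx - x)
            ay += w_cohesion * (cy - y)
            # Alignment: match the neighbours' average velocity
            avx = sum(b[2] for b in nbrs) / len(nbrs)
            avy = sum(b[3] for b in nbrs) / len(nbrs)
            ax += w_align * (avx - vx)
            ay += w_align * (avy - vy)
            # Separation: push away from neighbours that crowd too close
            for bx, by, _, _ in nbrs:
                d = math.hypot(bx - x, by - y)
                if 0 < d < radius * 0.5:
                    ax += w_sep * (x - bx) / d
                    ay += w_sep * (y - by) / d
        vx, vy = vx + ax, vy + ay
        # Clamp speed so the swarm stays stable
        speed = math.hypot(vx, vy)
        if speed > max_speed:
            vx, vy = vx / speed * max_speed, vy / speed * max_speed
        new_boids.append((x + vx, y + vy, vx, vy))
    return new_boids
```

Running `step` repeatedly on a list of boids is the whole simulation loop; everything else is just tuning weights and adding rules.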
These three simple rules are what allow birds to flock together and schools of fish to change direction in an instant. By experimenting with these rules and adding new ones, we can replicate the behaviour of ants, bats, and even people.

[vimeo http://vimeo.com/16405680]

Additionally, by limiting the movement of these particles to a surface, or making them respond to gravity or tension, we can imbue the structure of our model with a certain amount of internal logic.

As an exercise, we’ll take a continuous surface we’ve built in Rhino3D that we want to panellise. Using subdivision techniques, we can split the surface into a number of neat rectangular panels… This technique works, and each panel within the set has a roughly similar shape. However, if we deform the original surface and try the same technique… the panel sizes shift to such extremes that every panel is completely different. The subdivision still works in a mathematical sense, but the resultant form is responding to the mathematical construction of the surface within the computer, rather than to any physical limitations or constraints determined by the designer.

Changing tack, we try an agent-driven approach using the MeshMachine plug-in for Grasshopper: we randomly distribute agents across the surface and give them the goal of spreading themselves evenly over it. As we stated previously, agents follow instructions based on their current situation and try to optimise themselves without human input. They do this by starting in an unsatisfactory state and resolving into an optimal form over a series of iterations.
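The idea behind that relaxation can be sketched very simply. This is a toy stand-in, not MeshMachine: agents on a flat unit square (rather than a real Rhino surface) push away from one another until the spacing evens out, with the repulsion strength and step size chosen purely for illustration.

```python
import math

def relax(points, iters=50, step_size=0.05):
    """Toy relaxation: agents on a unit square repel each other
    until their spacing evens out, one small move per iteration."""
    pts = [list(p) for p in points]
    for _ in range(iters):
        for i, p in enumerate(pts):
            fx = fy = 0.0
            for j, q in enumerate(pts):
                if i == j:
                    continue
                dx, dy = p[0] - q[0], p[1] - q[1]
                d2 = dx * dx + dy * dy + 1e-9
                # Inverse-square repulsion from every other agent
                fx += dx / d2
                fy += dy / d2
            # Nudge the agent along the net force, clamped to the square
            p[0] = min(1.0, max(0.0, p[0] + step_size * fx / len(pts)))
            p[1] = min(1.0, max(0.0, p[1] + step_size * fy / len(pts)))
    return [tuple(p) for p in pts]
```

Starting from a tight cluster, a few iterations are enough to spread the agents apart; on a real surface the same loop would also project each agent back onto the geometry after every move.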
After a few seconds of simulation, the agents distribute themselves in such a way that there is very little variation in panel size anywhere in the model. The complexity of the panellisation goes far beyond anything a designer could describe by hand, with the added benefit that each panel in the system has self-organised into an optimum shape based on its own local requirements.

A variant of this technique is being used by researchers at the University of Stuttgart, who use agents to determine the position of hexagonal timber panels in order to create a structurally sound timber shell. The resultant structure is a 50mm-thick timber shell with no internal supports, the shape and organisation of each panel optimising itself to deal with the curvature and stress of the form. At the absolute other end of the spectrum, “The Situation Room,” a recent installation by Marc Fornes and theverymany, utilises agents to define the individual strips that make up the larger wholes of his structures. This is achieved by instructing simulated agents to crawl along the stress lines of the model, leaving trails which form the basis for the aluminium strips that make up the final structure. Each of these elements is prefabricated as 2D aluminium on a flatbed and then assembled together in sections.

Until recently, experimentation with agent-based simulations was limited to re-purposed animation software like Maya, or the more programmer-focused Processing. Earlier this year, however, a Boid library for Grasshopper was released over at Food4Rhino (Boid additionally requires the plug-in Anemone, and comes with an extensive set of tutorials). This tool-set allows you to tweak a number of different behaviours and quickly experiment with the underlying concepts. Thinking about agents and simulation within the design work-flow could represent the next paradigm shift (hah!) in digital architecture.
In the age of additive digital fabrication, agents can constrain free-form shapes to the limitations of a tool-path. Workstations on a floor-plan could self-organise to maximise the local conditions within each unit. There are thousands of untapped opportunities, and as architecture shifts away from a static modelling environment and towards an adaptive, reactive one, these tools will matter more and more.