Programmable Architecture

-Towards Human Interactive, Cybernetic Architecture-

Kensuke Hotta (B.Eng, M.Eng, Msc)
Architectural Association School of Architecture, 2013

Chapter 7
Experiment 2: Human Assisted GA with Processing

7-1 . Introduction

In the previous chapter, it was seen that Galapagos has several limitations. In particular, in the case of a dynamic fitness landscape, the real-time kinetic candidate was unable to achieve the ideal result (the ‘Kinetic ultimate’ candidate). In this chapter, a more flexible system that allows user participation in the process is created using another program, Processing, a Java-based language and environment.

fig 7-1, 1: Screenshot of the execution window

This figure shows a window shot of the program during execution. The proposed tensegrity structure in this experiment has four components. Bold black lines show the compression sticks, thin brown lines show the tension wires, blue bars show the suspension, and green triangles show the membranes forming the roof. The black grid shows the ground plane. Yellow lines connect each moving point to a crossing point of the grid, indicating rays from the sun to the ground. The red triangles are the shadows cast by the green membranes. These are dynamic systems.

fig 7-2, 1 : System diagram

The proposed system has two inputs, one from a sensor and one from a user device. These data are processed and result in a shape change of the kinetic roof.

The biggest difference and improvement over the previous model is the sandwich method. While the previous model allowed only one input, namely the environmental input from the sun, this model has two inputs: a top-down environmental input similar to the GA's, and a bottom-up user input explained below. Obviously, this is not a pure GA system, but rather a human-interactive evolutionary algorithm.

To explain the system's processes, diagrams are used. The roof is a tensegrity structure which can change shape by means of shape memory alloy.

fig 7-2 , 2 : System Diagram step1

Usually a kinetic roof works as an automatic top-down system driven by a Genetic Algorithm (GA). Each gene corresponds to the rest length of one tension member. Each component has 12 genes, each taking a value 160 < x < 200 (in unit-less GA dimensions), where x is the rest length of the corresponding wire. The components are updated periodically, for instance every 10 minutes, depending on the computational power available. These bounds were chosen to correspond with the values from the previous experiment: in chapter 6, each gene had a rest length 0.80 < x < 1.00, where x represents the ratio to the member's overall length. If a larger shrinking ratio such as 0.60 or 0.40 were allowed, the system would obviously be able to achieve a higher score, but this would change the parameters of the experiment.
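The encoding described above can be sketched in plain Java as follows. This is an illustrative sketch, not the thesis code: the class and method names are this sketch's own, but the dimensions (12 genes per component, rest lengths between 160 and 200) follow the text.

```java
import java.util.Random;

// Sketch of the genome encoding: each roof component carries 12 genes,
// one per tension wire, each gene being that wire's rest length in the
// unit-less range 160..200.
class Genome {
    static final int GENES_PER_COMPONENT = 12;
    static final double MIN_REST = 160.0, MAX_REST = 200.0;

    // Create one random individual for a roof made of `components` parts.
    static double[][] randomIndividual(int components, Random rng) {
        double[][] genes = new double[components][GENES_PER_COMPONENT];
        for (int c = 0; c < components; c++)
            for (int g = 0; g < GENES_PER_COMPONENT; g++)
                genes[c][g] = MIN_REST + rng.nextDouble() * (MAX_REST - MIN_REST);
        return genes;
    }
}
```

With four components, as in the execution window shown above, one individual is a 4 × 12 array of rest lengths.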

Weighting is the key to resolving conflicting inputs between individual users, as well as conflicts between individual and global inputs. Each user is assigned a particular weighting by the system authority. This weighting may be temporary and changeable, and it translates into the number of components a user can control. When different inputs are encountered in a given area, this weighting produces a compromise answer.
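One simple way to realise the compromise described above is a weighted average of the conflicting requests, blended in proportion to each user's assigned weighting. This is a hypothetical sketch of that rule; the thesis does not specify the exact formula, and the names here are illustrative.

```java
// Hypothetical compromise rule: several users request different target
// values for the same roof area; the system blends them in proportion
// to the weight each user was assigned by the system authority.
class WeightedInput {
    // Returns the compromise target as the weighted average of the requests.
    static double compromise(double[] requests, double[] weights) {
        double sum = 0, wsum = 0;
        for (int i = 0; i < requests.length; i++) {
            sum += requests[i] * weights[i];
            wsum += weights[i];
        }
        return wsum == 0 ? 0 : sum / wsum; // guard against all-zero weights
    }
}
```

A user with a weighting three times another's pulls the compromise three times as strongly towards their own request.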

fig 7-2 , 3 : System Diagram step 2

fig 7-2 , 4 : System Diagram step 3

Instead of ‘random’ mutation, user input is used to seed the next generation. This helps reduce the calculation time. It may, however, create a tendency for the system to converge on a local optimum, which would be a negative effect. In this experiment the user input is a signal from a mobile device, as demonstrated in the previous chapter. This is useful precisely because it differs from sensor input: a mobile device provides an immediate channel of input which is translated into a local deformation of the roof. However, it can also affect the global GA, because the global and local hierarchies are inter-connected. In other words, a user can edit part of the system's genome, which may have a positive or negative effect later. The central GA decides whether this signal is useful or harmful; if deemed harmful, it will sooner or later be eliminated by a kill strategy [Note: ‘kill strategy’ is a technical term in GA, meaning the gene will be made to disappear]. A user's local input persists over several generations.
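The user-seeded mutation described above can be sketched as a single operator: a pending user edit is written into the child's gene in place of a random mutation value, and ordinary selection then decides whether it survives. This is a minimal sketch under that reading; the names and the clamping step are this sketch's own assumptions.

```java
// Sketch of mutation seeded by user input: the user's requested value is
// written into one gene of the child, clamped to the legal rest-length
// range 160..200. Whether the edit survives is left to later selection
// (the "kill strategy"): a harmful edit lowers fitness and disappears
// over subsequent generations.
class UserSeededMutation {
    static double[] applyUserEdit(double[] child, int index, double userValue) {
        double[] out = child.clone();              // leave the parent array intact
        out[index] = Math.max(160.0, Math.min(200.0, userValue));
        return out;
    }
}
```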

fig 7-2 , 5 : System Diagram step 4

Subsequently vestiges of the input may or may not remain, depending on later calculations. This procedure is shown as a diffusion process.

7-3 . Before the Experiment, Preparation and Model Details

7-3-1 : Traer Physics for simulating springs

In formal terms, a particle system is defined as “a method for modeling fuzzy objects such as fire, clouds, and water. Particle systems model an object as a cloud of primitive particles that define its volume.” (Reeves, 1983) In this thesis a library called Traer.Physics 3.0 is used to extend Processing's functions. It is a simple particle-system physics engine for Processing, designed by Dr. Jeffrey Traer Bernstein (Bernstein) to be application- and domain-agnostic. Its creator said: “all this is supposed to do is let users make particles, apply forces and calculate the positions of particles over time in real-time.” The library consists of four parts: the ParticleSystem, which takes care of gravity, drag, making particles, applying forces and advancing the simulation; Particles, which move around in 3D space based on the forces applied to them; Springs, which control the interaction between two particles; and Attractions, which likewise control the interaction between two particles. Typical syntax for the Springs part is described below.

fig 7-3-1,1: Screenshot of simple pendulum

This code runs under ‘Processing’.

fig 7-3-1,2 : Sample syntax of 'Simple pendulum'

This is a simple sample code showing a dynamic spring pendulum, from the Traer home page (Bernstein). The procedure is discussed here in brief, focusing only on the spring part. Firstly, in lines 3 and 4, two particles are made, one free to move and the other fixed in position; these are named ‘p’ and ‘anchor’ in this code. Secondly, in line 14, the particles are connected by a spring using ‘physics.makeSpring()’, whose full signature is ‘Spring makeSpring( Particle a, Particle b, float strength, float damping, float restLength )’. This is the core line for a spring's creation, the same as in the previous chapter. Following Hooke's Law, the user needs to decide and control the spring's strength, damping and rest length. Springs connect two particles and attempt to keep them a certain distance apart. A Spring has three properties:

1) Rest Length - the spring wants to be at this length and acts on the particles to push or pull them exactly this distance apart at all times.

2) Strength - if the spring is strong it acts like a stick; if it is weak it takes a long time to return to its rest length.

3) Damping - if a spring has high damping it does not overshoot and settles down quickly; with low damping it oscillates.

In this instance Strength was set at 0.5, Damping at 0.1 and Rest Length at 75 (all numbers are unit-less). Finally, on line 35, the spring is drawn with a simple ‘line’ statement within ‘void draw()’. Its behaviour can then be observed on screen.
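To make the three spring properties concrete outside the library, the same behaviour can be reproduced with a minimal one-dimensional damped Hooke spring integrated by semi-implicit Euler steps. This is a plain-Java sketch, not Traer's implementation; the function name, the time step and the choice of integrator are this sketch's own assumptions.

```java
// Minimal 1D damped Hooke spring: a unit-mass particle attached to a
// fixed anchor, stepped with semi-implicit Euler integration.
class SpringSketch {
    // Simulate `steps` time steps and return the particle's final position.
    static double settle(double x0, double rest, double strength,
                         double damping, double dt, int steps) {
        double x = x0;   // particle position
        double v = 0.0;  // particle velocity
        for (int i = 0; i < steps; i++) {
            // Hooke restoring force toward the rest length, plus viscous damping.
            double force = -strength * (x - rest) - damping * v;
            v += force * dt; // unit mass: acceleration equals force
            x += v * dt;
        }
        return x;
    }
}
```

With the pendulum's values (strength 0.5, damping 0.1, rest length 75), a particle started away from the anchor oscillates and eventually settles at the rest length, which is exactly the "low damping oscillates, then settles" behaviour described above.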

7-3-2 : Details of the GA in Processing

Basically, almost all of the setup is the same as in the Grasshopper experiment in chapter 6, for experimental fairness. In brief, the experiment was carried out with the sun in the northern hemisphere, moving dynamically through 180 degrees, on the assumption that the sun rises on the eastern horizon, passes the top of the sky sphere and sets on the western horizon. The various arguments and parameters are detailed below.

1) Frequently changed parameters are placed at the top of the code (= global parameters).

fig 7-3-2, 5: A part of syntax : selection

fig 7-3-2, 6: A Diagram of different mating probability in roulette selection

In the left chart, each bar equals the green part (the difference between that individual's fitness and the minimum fitness), coloured differently for better understanding. These numbers are the source of the roulette. The length of each segment of the colourful bar on the right shows each candidate's ratio as a potential parent. Higher-fitness individuals occupy a wider range in this diagram, meaning they have a higher probability of mating.

From each individual's fitness value, a ‘total’ value is made by subtracting the minimum fitness and summing the differences. Then ‘papa_r’ and ‘mama_r’ are drawn as random integers in the range 0–total, and the parents' genes are decided accordingly.
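The roulette described above can be sketched as follows. The class and method names are illustrative, and a real-valued position on the wheel is used here in place of the integer ‘papa_r’/‘mama_r’ of the original code; the selection logic is otherwise the same: each individual's slice is its fitness minus the minimum fitness, so fitter individuals occupy wider slices.

```java
import java.util.Random;

// Roulette-wheel parent selection: a random position in 0..total lands
// in some individual's slice and selects it as a parent. Note that with
// this scheme the worst individual gets a zero-width slice and can
// never be picked.
class Roulette {
    static int spin(double[] fitness, Random rng) {
        double min = Double.POSITIVE_INFINITY, total = 0;
        for (double f : fitness) min = Math.min(min, f);
        for (double f : fitness) total += f - min;
        double r = rng.nextDouble() * total; // position on the wheel
        double acc = 0;
        for (int i = 0; i < fitness.length; i++) {
            acc += fitness[i] - min;
            if (r < acc) return i;           // landed in slice i
        }
        return fitness.length - 1;           // edge case: r at the wheel's end
    }
}
```

Calling spin() twice gives the two parents, ‘papa’ and ‘mama’, for one mating.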

fig 7-3-2, 5: A part of syntax: mating

5) The system determines whether a further execution is needed, using an end rule or convergence calculations.

Based on the selected parents, the system generates the next generation with one-point crossover. In addition, one of the best-fit genomes and one random genome are carried over to the next generation.

This is repeated ‘generation_lim’ times, and its convergence can be observed.
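The one-point crossover used in the reproduction step above can be sketched as follows (the elitist copying of the best and one random individual is a separate, trivial copy and is omitted here). Names are illustrative, not the thesis code.

```java
// One-point crossover: the child takes genes [0, cut) from 'papa'
// and genes [cut, n) from 'mama'.
class CrossoverSketch {
    static double[] onePoint(double[] papa, double[] mama, int cut) {
        double[] child = new double[papa.length];
        for (int i = 0; i < child.length; i++)
            child[i] = (i < cut) ? papa[i] : mama[i];
        return child;
    }
}
```

In the actual run, the cut point would itself be drawn at random for each mating, and the loop over matings is repeated ‘generation_lim’ times as described above.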

fig 7-3-2, 6: A part of syntax

7-3-3 : The Pre-conditions

Before the final experiment starts, confirmation is needed that the basic system is working properly. This system needs to operate under the same conditions as chapter 6's Grasshopper-based system. This means that the ‘Realtime kinetic’ candidate can achieve a slightly better score than the ‘Fixed’ candidate, but obviously cannot reach the ideal score of the ‘Kinetic ultimate’ candidate with a normal GA. Handling GA algorithms is unfortunately not easy, so this system was constructed manually by the author. The actual data measured with this system are shown as a graph below and as a table in the appendix (see appendix).

fig 7-3-3,1 Three candidates comparison

This graph depicts essentially the same information as in chapter 6. The vertical axis shows the number of pixels in shadow, which is equal to the shadow area (the shadow numbers are bigger than those in chapter 6 because of a different counting method; the units can be ignored). The horizontal axis shows the angle of the sun from 0 to 180 degrees, indicating the movement of the sun from the eastern horizon to the western horizon in the northern hemisphere. The candidates are set up as in chapter 6. The ‘Fixed Score’ candidate, shown as a gray line, is the fixed roof; its values are the basic benchmark for comparison with the kinetic roof. The blue line shows the ‘Realtime Kinetic’ candidate, the roof which adjusts in real time using the GA. The last line represents the ideal condition, called the ‘Kinetic ultimate’. The scores are measured in one-degree increments, each degree being executed over 170 generations; this took almost 30 minutes per degree to calculate. The reasoning behind the assumption that 170 generations are enough is elucidated below.

fig 7-3-3, 2: The differences between each pair of candidates

For a variety of reasons the differences between the candidates look small in the above graph (fig 7-3-3), so other representations were used for clarity's sake. The difference between the ‘Kinetic ultimate’ candidate and the ‘Fixed’ candidate is shown in blue, and that between the ‘Kinetic ultimate’ candidate and the ‘Realtime kinetic’ candidate in red, in figure 7-3-4. It is obvious from the graph that the blue area is bigger than the red. This shows that while the ‘Realtime kinetic’ candidate achieves a better result than the ‘Fixed’ candidate, there is still a gap between it and the ideal result. This is the same result as in chapter 6, confirming this experimental model as equivalent to the one used there.

7-3-4 : Deciding how to finish calculations

Many sources point to the difficulty of deciding how much calculation time is enough for a GA. These graphs (fig 7-3-4,1) show an example of the GA calculations after 300 generations (tables are in the appendix). The results were obtained with the sun fixed at 50 degrees, each generation having 10 individuals, repeated 10 times with the same settings. From this pre-experiment it is obvious that the GA is an unstable algorithm: even after 300 generations with the same settings, the highest mark varied. In actual use, however, a calculation time has to be fixed, so an appropriate duration must be chosen. Green dots show the unique highest-mark points; blue dots are the highest points of the two-dimensional approximation. The average of the green dots is 143.9 generations and that of the blue dots 160.8 generations. From these two values it seems clear that anything over 160 generations is enough for this setting, so 170 generations is more than adequate.
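The experiment settles on a fixed budget of 170 generations, but an alternative end rule consistent with the observation above is to stop once the best score has stopped improving for a given number of generations. The sketch below illustrates that plateau rule; it is an illustrative alternative, not the stopping rule the thesis actually used, and the names are this sketch's own.

```java
// Plateau stopping rule: given the best fitness recorded at each
// generation, stop at the first generation after which the best score
// has failed to improve for `patience` consecutive generations.
class StopRule {
    // Returns the generation index at which the run would stop.
    static int stopAt(double[] bestPerGeneration, int patience) {
        double best = Double.NEGATIVE_INFINITY;
        int sinceImprove = 0;
        for (int g = 0; g < bestPerGeneration.length; g++) {
            if (bestPerGeneration[g] > best) {
                best = bestPerGeneration[g];
                sinceImprove = 0;            // improvement resets the counter
            } else if (++sinceImprove >= patience) {
                return g;                     // plateau reached
            }
        }
        return bestPerGeneration.length - 1;  // budget exhausted
    }
}
```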

fig 7-3-4,1 : Ten trials of GA over 300 generations

fig 7-4,1: Program Execution Interface

The figure (fig 7-4,1) shows an actual display shot during the program's execution. When the user clicks ‘Run’ in the main code window, two further windows appear: one is the control window, the other the execution window. In this program, mainly four things are happening:

1) Physical tensegrity simulation,

2) GA for shape optimization,

3) User intervention (help) controlled by a slider, and

4) Area Calculation of shadows.
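Item 4 above, the area calculation, measures the shadow by counting pixels of the shadow colour in the execution window; in Processing this would mean calling loadPixels() and scanning the pixels[] array. The sketch below shows the counting step with a plain int[] frame buffer standing in for pixels[], and ‘shadowColor’ as an assumed packed-colour value; names are illustrative.

```java
// Shadow area by pixel counting: scan a frame buffer and count the
// pixels that carry the shadow colour. One pixel equals one unit of
// shadow area (the units can be ignored, as noted in the text).
class ShadowArea {
    static int countShadowPixels(int[] frame, int shadowColor) {
        int count = 0;
        for (int px : frame)
            if (px == shadowColor) count++;
        return count;
    }
}
```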

The program attempts to keep the conditions as close as possible to those of the previous experiments in chapter 6, but it was not possible to preserve all of the settings.

On the following pages, five different executions are shown as graphs. Each graph shows a unique ‘GA + Human’ run; the three candidates (Fixed, Real-time Kinetic, Kinetic Ultimate) are the same in each. Each ‘GA + Human’ score depends on the proficiency of the human and on the timing and number of interventions. Five different cases are shown here to give a representative overview.

fig 7-4,2: Chap7 Experiment Result, Execution-1

fig 7-4,3: Chap7 Experiment Result, Execution-2

fig 7-4,4: Chap7 Experiment Result, Execution-3

fig 7-4,5: Chap7 Experiment Result, Execution-4

fig 7-4,6: Chap7 Experiment Result, Execution-5

Points for consideration

1) When human input works properly, the ‘GA + Human’ method almost reaches the ideal score of the ‘Kinetic ultimate’ for certain periods. Moreover, in some instances ‘GA + Human’ exceeds the ‘Kinetic ultimate’ for short periods of time. One can assume that human intelligence can predict what is going to happen next within a short time span, whereas the GA cannot predict and its calculation is executed in a discrete model; hence the ‘GA + Human’ method can momentarily exceed the ‘Kinetic ultimate’. The ‘Kinetic ultimate’ scores are ideal scores, but that does not mean they are the best possible scores, for two reasons. One is that the ‘Kinetic ultimate’ does not necessarily reach the best answer: as explained in the previous chapter, there is no end mark for finishing a calculation. In this case each degree is the result of 300 generations of calculation, but this number may not be enough. The other reason is that this GA's settings may not be appropriate for obtaining the best score; a GA has many parameters and methods, and the result is sensitive to them. Because of time limitations, concerns about the GA itself are left for further investigation.

2) When the ‘Kinetic GA’ is observed with the human eye, it is obvious that the roof sometimes takes an inefficient form for maximising the shaded area. This is a necessary step of the GA for evaluating genes, but it produces noise in real time. The line of the ‘Real-time Kinetic’ (simple GA) does not show this temporal fluctuation, since it comes from an ongoing calculation, so its score sometimes fails to follow the ideal value (‘Kinetic ultimate’).

fig 7-4,7: Total sum of each method in the five experiments in chapter 7

Seen as a sum of scores, the ‘GA + Human’ scores are generally unstable. Roughly speaking, however, the ‘GA + Human’ score is better than that of the ‘Fixed’ method, and sometimes exceeds that of the ‘Real-time Kinetic’ method based on the simple GA, but it has little chance of exceeding the ‘Kinetic ultimate’ method overall.

7-5 . Argument and Conclusion

Points of discrepancy between the first model (chapter 6) and this model (chapter 7) are raised below. In this thesis these discrepancies are not part of the central argument and can therefore be ignored. A completely different model would have sufficed for the proposition set out in chapter 3, but the key was the step-by-step exploration of this model. The Grasshopper experiment raised issues and generated hypotheses that have been tested in this chapter. Here, the mathematical model (the Processing code) confirmed the same minimum set of points, showing that a normal GA does not work effectively where dynamic fitness calculations are required, and a model using Interactive Evolutionary Computation (IEC) was designed and tested. While reaching similar conclusions, the two chapters can be said to be logically independent.

The first key difference relates to the physical tensegrity simulations themselves. Firstly, gravity differs between the first model (the Grasshopper definition in chapter 6) and the second model (Processing in chapter 7). Both physics engines (Kangaroo and Traer.Physics 3.0) are unit-less; their materials are massless, leading to weight-less calculation, and the scale factor is ignored. For architecture these properties matter from a structural point of view, but in this situation they do not affect the shadow area, and they were simply adjusted manually as appropriate, as were the deformation adjustments. Secondly, the available spring parameters differ and are set to different values. A spring's character is decided by three parameters: strength, damping and rest length. These settings differ between the models, but were checked visually to ensure the springs behave in a similar way. There is no way to validate the values, so the data are not strictly equivalent between the two models.

The second key issue relates to the software used. As explained before, a GA consists of a number of procedures, each with its own settings and parameters. It is essentially impossible to recreate the same GA engine unless it is completely open source. Although the author tried to make the GA in chapter 7 the same as Galapagos, they are different. Galapagos is of course a better, more polished engine, but it is impossible to say whether its solutions are better or worse than the other's; the phrase ‘best solution’ cannot be defined here, as it depends on the objective that is set.

This creates a dilemma between obtaining as accurate an answer as possible by sacrificing calculation time, and obtaining as quick an answer as possible by limiting accuracy; the settings must be adjusted to an appropriate compromise. This dilemma, however, does not affect the final conclusions of this thesis. Even the less efficient GA managed to generate scores across all model types. The two models performed their experiments independently, making comparison of absolute scores difficult. The more important measures are therefore the proportional scores of the ‘Fixed roof score’, the ‘Pure GA score’ and the ‘Human-helped GA score’ relative to the ‘Ultimate score’, which is set as the objective of 100% shade coverage.

The final key difference between the models relates to the area calculation of shadows. The total shadow area and its range differ because the measuring methods differ. There are several reasons for this: the Grasshopper component called ‘exposure’ is a sort of black box, and its evaluation mesh plane was arbitrarily set as a rectangle, whereas in Processing, instead of mesh subdivisions, pixels are counted in the execution window. This problem can be solved by using logarithmic calculations, and the final result is not affected by the value range since, again, it is the proportion between the maximum values and the measured data that is critical.


Human-assisted GA achieves a better score than a pure GA system. As the score is the objective function, the shaded area is generally larger with human-assisted GA; in other words, the environment underneath the roof is darker. Human input can overcome the GA's weak point, namely that an unassisted GA is not effective for the ever-changing fitness landscape of a real-time kinetic roof. Rather than a completely autonomous, randomly seeded system, some outside intelligence can help to improve the architectural system.