Chapter 6

Experiment 1: Simple Real-Time GA with Grasshopper

6-1. Introduction

Continuing the focus on illumination from Chapter 5, this chapter describes a series of experiments measuring the shading capacity of various roof types under different light exposures, using Rhinoceros and Grasshopper. The special feature of this construction model was that, unlike a traditional CAD model, it was not static but dynamic, using a variety of techniques discussed later. The model was built with 'Kangaroo Physics', a Grasshopper plug-in containing a physics engine that allows physical modelling, and 'Galapagos', another plug-in containing the GA (genetic algorithm) components used for optimization. Real-time experiments require a dynamic model as well as a set time period. This second aspect, the set time period, became a separate research issue.

fig 6-1,1 : Program Execution Display Shot

This figure shows the interfaces of Rhinoceros, Grasshopper and Galapagos.

6-2. Aim

The aim of these experiments was to show that the proposed kinetic roof adapts most effectively to sun exposure over time, providing the best result. In the first experiment, the number of shaded cells was counted using the 'Exposure' component in Grasshopper; the greater the area of shade, the better. In other words, the Objective Function (the necessary light level) was a constant in this experiment. The second experiment measured how well three different roof systems (fixed, pre-optimized and kinetic) could follow a fluctuating Objective Function. It measured the systems' adaptability, focusing on two issues: the margin of error in matching the Objective Function, and the speed at which they adapted.

6-3. Preparation and Mathematical Definitions of the Model

Functionally, the main objects in this digital experiment can be divided into 5 parts (fig 6-3,1). Each part is explained in turn below.

fig 6-3,1: The Whole Grasshopper Definition with parts

Part one composes the static components of the tensegrity structure, part two is the spring system using the physics engine, part three makes up the membranes connected to the tensegrity structure, part four is the sun and its vector, and part five is the evaluative and recording function.

6-3-1. Static Tensegrity Structure (part1)


Firstly, the static tensegrity roof is made by combining Rhinoceros modelling with a group of Grasshopper components (fig 6-3-1,1), (fig 6-3-1,2). The basic form of the tensegrity component is modelled in Rhinoceros and referenced as Grasshopper geometry. These components are then arrayed in the X and Y directions; the array in Grasshopper is built simply from 'Series' and 'Move' components. In this experiment a 2-by-2 array, four components in total, is used. The roof surface is an array made of the developed component, which consists of 3 upper cables, 3 lower cables, 6 side cables, 6 struts and 1 central spring. By arraying this component 2-dimensionally along the X and Y axes, a roof-like surface is produced.

fig 6-3-1,1: One Tensegrity Component

One tensegrity component consists of 3 upper cables, 3 lower cables, 6 side cables, 6 struts and 1 central spring for suspension.

Fig 6-3-1,2: Arrayed Tensegrity Components

By using the 'Move' component in Grasshopper, tensegrity components are arrayed 2-dimensionally along the X-direction and Y-direction vectors. In this way, a roof-like surface is produced.

Because the component shape is hexagonal, the array is laid out more like a honeycomb than a perpendicular grid.
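The 'Series'/'Move' array logic can be sketched outside Grasshopper. The following sketch only illustrates the bookkeeping; the spacing values (dx, dy) and the half-width row shift are hypothetical stand-ins for the dimensions of the actual Rhino component.

```python
# Sketch of the 'Series' + 'Move' array logic from Part 1 (hypothetical
# spacings; the real values come from the Rhino model).
def array_translations(nx, ny, dx=3.0, dy=2.6, honeycomb=True):
    """Return (x, y) translation vectors for an nx-by-ny component array.

    With honeycomb=True every other row is shifted by half a component
    width, approximating the hexagonal packing described in the text.
    """
    vectors = []
    for j in range(ny):
        x_offset = dx / 2.0 if (honeycomb and j % 2 == 1) else 0.0
        for i in range(nx):
            vectors.append((i * dx + x_offset, j * dy))
    return vectors

# The experiment uses a 2-by-2 array, i.e. four components in total.
print(array_translations(2, 2))
```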

6-3-2. Spring System using the Physics Engine (Part2)

fig 6-3-2,1: Hooke's law (fig from Wikipedia)

Fig 6-3-2,2: Cables and Their Shortest Length

The data from the previous part (Part 1) is flattened using the 'Bang!' component. The tension wires, compression members and central suspension are handled by Kangaroo Physics. Their material properties differ but can be adjusted through several parameters.

Fig 6-3-2,3: Core parts of the Spring

The line data from the previous part (Part 2) is connected to the 'Springs' component, which turns the lines into dynamic objects. It exposes parameters such as 'Stiffness', 'Damping', 'RestLength', 'Cutoff' and 'Compfail', but only 'Stiffness' and 'RestLength' are used here.
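The behaviour governed by 'Stiffness' and 'RestLength' is essentially Hooke's law (fig 6-3-2,1). A minimal sketch, with an assumed sign convention and illustrative parameter values:

```python
def spring_force(current_length, rest_length, stiffness):
    """Axial force following Hooke's law (F = -k * x), where x is the
    extension beyond the rest length.  Negative values pull the ends
    together, positive values push them apart (sign convention assumed)."""
    extension = current_length - rest_length
    return -stiffness * extension

# A cable stretched past its RestLength pulls its ends back together...
print(spring_force(1.2, 1.0, stiffness=100.0))   # tension (negative)
# ...while one compressed below it pushes them apart.
print(spring_force(0.9, 1.0, stiffness=100.0))   # positive
```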

Fig 6-3-2,4: Physics Engine

The spring data from the previous part (Part 2-2) is connected to the 'Kangaroo' component, the core engine of Kangaroo Physics. It needs to be connected to 'Kangaroo settings', and it outputs the dynamically deformed geometry on its right side.

Fig 6-3-2,5: Colour of the Strings

Through Kangaroo Physics, static lines become dynamic springs. The degree of shrinking is visualized using colour: the original RestLength is set at 1.0, while the lower limit (green) is set at 0.9 and the upper limit (red) at 1.2.
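The colour mapping can be sketched as a linear gradient between the two limits. The linear blend and the RGB end colours are assumptions here; only the 0.9 (green) and 1.2 (red) limits come from the text.

```python
def shrink_colour(length_ratio, lo=0.9, hi=1.2):
    """Map a spring's length ratio (current length / RestLength) onto a
    green-to-red gradient.  lo (0.9) maps to pure green, hi (1.2) to
    pure red; a linear blend is assumed in between."""
    t = (length_ratio - lo) / (hi - lo)
    t = max(0.0, min(1.0, t))          # clamp outside the limits
    return (int(round(255 * t)), int(round(255 * (1.0 - t))), 0)  # (R, G, B)

print(shrink_colour(0.9))   # fully green
print(shrink_colour(1.2))   # fully red
```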

6-3-3. Membranes on the tensegrity structure (Part3)

Fig 6-3-3,1: Membranes

Some of the point data from Part 1 (6-3-1) is used for the triangular membranes. Four tensegrity components are used in this experiment, so there are also four triangular membranes. Three vertices are manually selected for each to make the dynamic surfaces.

6-3-4. The Sun (Part4)


The dynamic sun is defined here. The sun's speed, position and initial position can be controlled statically, but in the experiment the sun's movement is driven by an automated graph over a set time duration, using the 'Timer' component in Grasshopper.

fig 6-3-4,1: Grasshopper Sun Definition

The sun moves from one side of the digital 'world' to the other, its angle defined from -π/2 to +π/2 in Grasshopper. Its path follows a semicircle, and its direction vector always points towards the centre of the evaluation plane. By using the 'Fader' function of the 'Firefly' plug-in together with the Timer, the sun's movement is automated. The solar altitude is set at 90 degrees (note: this would be the location of the sun at the equator at the equinox), though the real sun's path is not a pure perpendicular two-dimensional semicircle.
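The sun definition above can be sketched as a parametric function. The orbit radius R is a hypothetical value; the essential features taken from the text are the -π/2 to +π/2 sweep and the direction vector aimed at the centre of the evaluation plane.

```python
import math

def sun_state(theta):
    """Position and direction of the digital sun for an angle theta in
    radians, swept from -pi/2 to +pi/2.  The sun rides a semicircle of
    (assumed) radius R above the evaluation plane and always aims at
    the plane's centre at the origin."""
    R = 100.0                                  # hypothetical orbit radius
    position = (R * math.sin(theta), 0.0, R * math.cos(theta))
    # Direction vector: from the sun towards the centre, normalized.
    direction = tuple(-c / R for c in position)
    return position, direction

pos, vec = sun_state(0.0)        # sun at the zenith
print(pos, vec)
```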

6-3-5. The Evaluative and Recording function (Part5)

In the fifth part of the model, the shadow cast by the roof's membranes is calculated on the evaluation mesh surface. Specifically, the number of shaded cells is counted mesh by mesh using the 'Exposure' component. That data is then recorded and exported as a *.csv file readable in Excel.

fig 6-3-5,1: Grasshopper Evaluation Definition

The evaluation plane is a mesh surface, subdivided in this experiment into an 11x11 grid of 121 cells. If more accuracy is needed, a finer grid such as 100x100 (10,000 cells) could be used, although this might slow processing depending on the computer's speed. Each mesh cell is connected to an 'Exposure' component, which returns a binary value indicating whether the cell is exposed directly to the sun ('0') or hidden in shadow ('1'). In the second experiment, which uses multiple roof styles (explained below), the exposure number is more complex: where there are multiple light sources (for example, the sun plus reflected light), the returned value may exceed 1 and is no longer binary, so the data has to be handled with alternative operations. This data is connected to a 'Gradient' component, which visualizes it in colour, and to the 'Data Recorder' for export into Excel.
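The scoring step can be summarized in a few lines. Clamping multi-source exposure values to 1 is one assumed way of handling the non-binary case; the text says only that the data has to be dealt with using alternative operations.

```python
def shadow_score(exposure_values):
    """Sum the per-cell exposure values of the evaluation mesh.

    With a single sun each cell reports 1 (in shadow) or 0 (directly
    lit), so the score is simply the shaded-cell count.  With several
    light sources a cell may report values above 1; here each cell is
    clamped to 1 so the score stays comparable (an assumed handling of
    the non-binary case mentioned in the text)."""
    return sum(min(v, 1) for v in exposure_values)

# An 11x11 grid has 121 cells; a toy 3-cell example:
print(shadow_score([0, 1, 2]))   # → 2
```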

6-4. The Unique Features / Limitations of ‘Galapagos’

Here, instead of reviewing the general procedure for using a GA, the specific features and limitations of Galapagos's implementation are discussed point by point. These features can work effectively in some situations and ineffectively in others. The 'steps' below follow the steps defined in Chapter 3-3-3.

• Step 1-generating the initial group: Galapagos has an Initial Boost factor setting which multiplies the number of individuals in Generation Zero.

• Step 2-N/A

• Step 3-selection procedure: Galapagos provides Isotropic Selection and Exclusive Selection options for choosing the parents of the next generation. In this experiment those options are assigned randomly. According to its creator David Rutten, these methods are basic and limited but currently good enough for the algorithm. Biased Selection (explained above), though common in nature, is not applied here.

• Step 4-crossover: There is a limitation in the ‘Coupling Process’ and the ‘Mating Process’. Unlike other GA implementations, in Galapagos the only way to choose a mate is by Genomic Distance on the Genome Map (explained below). Once an individual has been selected by the Selection process, a mate must be found through the Coupling process, which then leads to the Mating process. There are two extremes in coupling on the Genome Map: incestuous mating and zoophilic mating (both terms are explained below). Each is problematic for different reasons, but Galapagos balances in-breeding (incestuous mating) against out-breeding (zoophilic mating). A balance point, or breeding factor, allows one to select partners that are neither too close nor too different. Galapagos leaves the breeding factor adjustable, between -100% (total out-breeding) and +100% (total in-breeding). In Galapagos, “mate selection at present completely ignores mate fitness. This is something that needs looking into for future releases, but even without any advanced selection algorithms the solver still works.” (from David Rutten’s website)
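The distance-based coupling described above can be sketched as follows. The Euclidean metric and the linear mapping of the breeding factor onto a sorted-neighbour index are assumptions; Galapagos's internal scheme is not published in this detail.

```python
import math

def genome_distance(a, b):
    """Euclidean distance between two genomes in N-dimensional gene space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_mate(individual, population, breeding_factor):
    """Choose a mate by genomic distance only, as Galapagos does.

    breeding_factor runs from -1.0 (prefer the most distant partner,
    'zoophilic' out-breeding) to +1.0 (prefer the nearest partner,
    'incestuous' in-breeding); 0.0 aims at the middle of the range.
    Fitness is deliberately ignored, mirroring the limitation quoted
    from David Rutten.  The interpolation scheme itself is assumed."""
    others = [g for g in population if g is not individual]
    others.sort(key=lambda g: genome_distance(individual, g))
    # Map breeding_factor [-1, +1] onto an index from farthest to nearest.
    t = (breeding_factor + 1.0) / 2.0           # 0.0 .. 1.0
    index = round((1.0 - t) * (len(others) - 1))
    return others[index]
```

With a population of genome tuples, `pick_mate(me, population, 1.0)` returns the nearest neighbour and `pick_mate(me, population, -1.0)` the most distant one.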

fig 6-4,1 : Genome-Map and Mating Method (Drawn by K.Hotta)

The ‘Genome-Map’ is useful for adjusting the breeding factor and other aspects of the mating process. It is an approximate 2-dimensional map made by projection from higher dimensions: the genome (an N-dimensional value) is compressed to 2 dimensions. (It is assumed that all genomes in a species have the same number of genes; this is not a limitation of Evolutionary Algorithms in general, though it is currently a limitation of Galapagos.) All the individuals in a population are displayed as dots on a 2-dimensional grid (the grid axes have no intrinsic meaning, because they result from a projection). The distance between two dots on this grid approximates the distance between the individuals' genomes in higher-dimensional gene space. The only information a Genome Map conveys is which genomes are similar (close together) and which are different (far apart).

‘Incestuous mating’ occurs when an individual finds its partner in its own neighbourhood. On the Genome Map, individuals that are close together are very much alike; their genomes tend to be similar. The considerable risk of incest in Evolutionary Solvers, including GA, is a rapid decline in population diversity. Low diversity reduces the chance of finding alternative solutions, and thus risks getting stuck in local optima.

In contrast, ‘zoophilic mating’ selects a partner by excluding everyone near the mating individual. Such genomes are totally different and often incompatible, as the genome distance is huge. This method is also recognized as harmful, especially when the fitness landscape has multiple peaks (the fitness landscape is defined in Chapter 3-3-3), or when a population contains more than one group of genomes. During iteration, candidates climb their own local fitness peaks. When two individuals climbing different peaks produce offspring, the offspring tend to fall into a fitness valley somewhere between them; in other words, they tend to be non-optimal and not fit for purpose.


• Step 5-coalescence: Galapagos has two methods for coalescence. One is Crossover Coalescence and the other is Blend Coalescence with blending preferences (both terms are explained below). Currently these methods are applied at random. Genes in evolutionary solvers such as Galapagos behave like floating-point numbers.
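The two coalescence methods can be sketched for floating-point genomes as follows; the `preference` weighting in the blend variant is an assumed parameterization.

```python
import random

def crossover_coalescence(mum, dad, rng=random):
    """Classic crossover: pick a random cut point and splice the two
    gene sequences together."""
    cut = rng.randrange(1, len(mum))
    return mum[:cut] + dad[cut:]

def blend_coalescence(mum, dad, preference=0.5):
    """Blend each pair of floating-point genes.  preference is an
    assumed weighting: 0.5 averages the parents, values nearer 0 or 1
    favour one parent's genes."""
    return [preference * m + (1.0 - preference) * d
            for m, d in zip(mum, dad)]

# Genes behave like floating-point numbers in Galapagos:
print(blend_coalescence([0.0, 1.0], [1.0, 0.0]))   # → [0.5, 0.5]
```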

• Step 6-mutation: Mutation in Galapagos is currently restricted to Point Mutations (explained below).

fig 6-4,2 Genome Graph and Mutation (Drawn by K.Hotta)

As the ‘genome’, or gene sequence, is the key driver of a GA, the Genome Graph is a useful way to visualize gene sequences as a 2-dimensional graph. Thanks to this representation, subspecies or lone species can be identified at a glance. A common method for displaying multi-dimensional points on a two-dimensional medium is to draw them as a series of lines connecting values on a set of vertical bars, each bar representing a single dimension. This makes it easy to display not just points with any number of dimensions, but even points with different numbers of dimensions on the same graph. In fig 6-4,2 an example genome consisting of 5 genes is shown. It is common in GA implementations, including Galapagos, for each gene to be given a decimal value between 0.0 and 1.0, such as G0=0.25, G1=0.5, G2=1.0, G3=0.5, G4=0.5 on the graph. In this way, every unique genome has a unique graph line.

This figure illustrates a Point Mutation, where a single gene value changes: the original value of G2 was 1.0 but becomes 0.75 after the operation. This is currently the only mutation type possible in Galapagos. An alternative method, Inversion Mutation, swaps two adjacent gene values. This operation has a significant benefit only when neighbouring genes have a very explicit relationship; otherwise it will probably harm the procedure, as it produces too large a modification. In fig 6-4,2, G0 and G1’s values have been swapped horizontally. David Rutten also mentions Addition Mutations and Deletion Mutations, which change the number of genes. At present Galapagos only works on fixed-size genomes, but this is not a logical or practical limitation and may be overcome in future releases.
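The two mutation types discussed above can be sketched as follows (the sampling details are assumptions; Galapagos's internals are not documented here).

```python
import random

def point_mutation(genome, rng=random):
    """Point Mutation (the only kind Galapagos performs): replace one
    randomly chosen gene with a new value in [0.0, 1.0]."""
    mutant = list(genome)
    mutant[rng.randrange(len(mutant))] = rng.random()
    return mutant

def inversion_mutation(genome, rng=random):
    """Inversion Mutation (not available in Galapagos): swap two
    adjacent gene values."""
    mutant = list(genome)
    i = rng.randrange(len(mutant) - 1)
    mutant[i], mutant[i + 1] = mutant[i + 1], mutant[i]
    return mutant

# The example genome from the figure:
genome = [0.25, 0.5, 1.0, 0.5, 0.5]
print(point_mutation(genome))
print(inversion_mutation(genome))
```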

6-5. Four Candidates

There are 4 styles of roof surface, or candidates, used in the GA experiment, identified as 'Fixed', 'Pre-optimized', 'Kinetic' and 'Kinetic ultimate'. The first candidate is the default static model, the 'Fixed' roof, made of the developed components. The second, 'Pre-optimized', is also a static fixed roof, but its shape is optimized with the GA (Galapagos) for its objective of generating the maximum amount of shadow. The third candidate, the 'Kinetic' roof, also uses the GA but runs in real time, with the duration decided arbitrarily. It was run for several time periods: 100,000 milliseconds (= 100 seconds = 1 minute 40 seconds), 300,000 milliseconds (= 300 seconds = 5 minutes), 500,000 milliseconds (= 500 seconds = 8 minutes 20 seconds), 700,000 milliseconds (= 700 seconds = 11 minutes 40 seconds) and 900,000 milliseconds (= 900 seconds = 15 minutes). The last candidate, 'Kinetic ultimate', is the ultimately optimized shape. It is similar to the 'Kinetic' candidate, but it is not optimized in real time: the shape is re-optimized at every degree the sun moves. The remaining conditions are kept fixed to maintain experimental validity, for instance the number and size of components, the size of the evaluation plane, the number of subdivisions, the units and the anchor points.

6-6. Graph Approximation and Visualization

The evaluation plane measures the shadow from the various roof candidates and feeds its cell data to an Excel spreadsheet. From there the score is converted into graphs to show not only the comparison between candidates but also the changes over time. In addition, a curve-fitting method is used for clarity: a degree-6 polynomial approximation curve is employed. This mathematical definition (explained in appendix -1) can be applied to curves from degree one, which is just a straight line, up to higher-degree curves with more inflection points. A higher degree allows the curve to follow more complex shapes, but high-degree curves tend to be overly sensitive to the data and may introduce spurious oscillations (Runge's phenomenon1). For this reason it is better, where possible, to choose a lower degree. Usually the appropriate degree is chosen using the method of least squares2 (also explained in the appendix). The procedure is as follows: first set up a polynomial equation, then set the partial derivatives of the squared error to zero to determine the coefficients. Once the curve function is obtained, its fit is measured by the correlation coefficient3. In Excel its square (r^2) is used as the indicator of fit: r ranges from -1 to 1, so r^2 ranges from 0 to 1, and the better the fit, the closer r^2 is to 1 (see appendix). These methods come from probability theory.
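The least-squares procedure described above (normal equations from zeroed partial derivatives, then a fit check via r^2) can be sketched in plain Python. The function accepts any degree; a degree-2 fit is shown as a stand-in for the degree-6 fit used in the chapter.

```python
def polyfit_least_squares(xs, ys, degree):
    """Fit a polynomial of the given degree by the method of least
    squares: build the normal equations (obtained by setting the
    partial derivatives of the squared error to zero) and solve them
    with Gaussian elimination.  Returns coefficients c[0] + c[1]*x + ..."""
    n = degree + 1
    # Normal-equation matrix A and right-hand side b.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for k in range(col, n):
                A[row][k] -= f * A[col][k]
            b[row] -= f * b[col]
    coeffs = [0.0] * n
    for row in reversed(range(n)):
        s = b[row] - sum(A[row][k] * coeffs[k] for k in range(row + 1, n))
        coeffs[row] = s / A[row][row]
    return coeffs

def r_squared(xs, ys, coeffs):
    """Coefficient of determination: 1 means a perfect fit."""
    predict = lambda x: sum(c * x ** i for i, c in enumerate(coeffs))
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - predict(x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Example: points lying exactly on y = x^2 recover [0, 0, 1] and r^2 = 1.
coeffs = polyfit_least_squares([-2, -1, 0, 1, 2], [4, 1, 0, 1, 4], 2)
print(coeffs, r_squared([-2, -1, 0, 1, 2], [4, 1, 0, 1, 4], coeffs))
```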

6-7. Comparison of the 4 Candidates


In this part, the four different candidates are examined: the 'Fixed', 'Pre-optimized', 'Kinetic' (real-time) and 'Kinetic ultimate' roofs. The results are below.

fig 6-7,1: Comparison in 4 candidates (drawn by K.Hotta)

The solid lines show the scores with the curve-fitting method applied, while the dotted lines show the actual scores. The vertical axis shows the score, i.e. the number of shaded cells on the evaluation plane; the horizontal axis indicates the sun's location in degrees. The solid lines leap up at the very beginning and the very end, but this can be ignored as a side effect of the curve-fitting methodology.

For Consideration:

The following three results are noted:

1). All candidates except the ‘Pre-optimized’ candidate show 2 peaks, around -50, and 40 on the sun axis.

2). With a few exceptions in the middle, the 'Kinetic ultimate' candidate has the highest score. In contrast, the 'Fixed' and 'Kinetic' candidates measured the lowest scores, except in the -70 to -50 range, where the 'Pre-optimized' candidate scores lower.

3). The 'Pre-optimized' candidate scores relatively low at the beginning (from -75 to -10), and lower than the 'Fixed' candidate at the very beginning (from -70 to -50). From the middle to the end of the sun path (-10 to 60) it scores nearly the same as the 'Kinetic ultimate' candidate, which is the highest.

The author anticipated that the 'Kinetic ultimate' candidate would record the highest score over the whole period. The 'Pre-optimized' candidate has a directionality with respect to the sun vector, which is why it scores highly at the end of the period while sacrificing performance in the initial part. The unexpected result came from the 'Kinetic' candidate: it scored slightly higher than the 'Fixed' candidate but lower than the rest. Suspecting something wrong in the way the GA was being used, the experiment was repeated several times, with similar results. It is possible that this algorithm is not appropriate for real-time use because of the calculation time it requires.

6-8. Comparing the Computing Time for the Kinetic Candidates

Reflecting on the results for the 'Kinetic' candidate, a lack of calculation time is proposed as the reason for its poor performance. A GA generally takes time to complete its procedure, so in the next phase several time periods are examined and compared to see whether they affect the 'Kinetic' candidate's score.

fig 6-8,1: The comparison of different computing times, using the Kinetic candidate.

In this graph, as in the previous one, the solid lines show the curve-fitted scores and the dotted lines the actual scores. The vertical and horizontal axes show the shadow score and the sun's location in degrees respectively. All the candidates here are kinetically controlled roofs ('Kinetic' candidates) evaluated over different time periods: for example, 'Kinetic 100s' is evaluated over 100 seconds and 'Kinetic 300s' over 300 seconds. The other conditions are kept constant. The high score of the 'Kinetic ultimate' candidate is included in red.

For Consideration:

The results are summarized below:

1) There is a huge gap between the 'Kinetic ultimate' candidate (the red line) and all other candidates, except at the very beginning (from -90 to -60) and the very end (from 55 to 90) of the time period.

2) The difference in performance between the 'Kinetic ultimate' candidate and others is a factor of 3 in the first half, and it gradually shrinks to a factor of 1.5 in the latter half.

3) Within the kinetic candidates, longer time periods were expected to yield better results, but this was not borne out.

When the scores are compared at different angles (the x-axis of the graph), the beginning (from -75 degrees to 0 degrees) scores lower than the later part (from 20 degrees to 54 degrees). This is understandable, given that a GA generally takes time to converge its population. Contrary to expectation, however, even when sufficient calculation time was allowed, the result still did not rise appreciably towards the ideal ('Kinetic ultimate') result. Therefore another approach is needed to achieve a higher score in real-time calculation.

6-9. The Comparison between Different Numbers of Resets within the Kinetic Candidates

('resets' are explained below)


The previous results raise two interesting issues. Firstly, a careful examination of the first graph (fig 6-7,1) shows that the 'Kinetic' candidate initially has a higher score, but gradually falls behind the 'Fixed' candidate. Secondly, although the 'Kinetic' candidate does not score highly, it has the same potential as the ideal result, which raises the question of how it can approach the ideal result of the 'Kinetic ultimate' candidate. To answer this, a hypothetical cause and solution are proposed here. Continuous GA calculation can easily lead to a lack of variation in the population and hence lower performance. If this supposition is right, stopping the algorithm and resetting it repeatedly might boost the GA's results. A special technique called the 'initial boost' is commonly used in GA to give the population adequate initial diversity.

Six different reset periods are selected to create six candidates alongside the 'Kinetic ultimate' candidate, the ideal result created by the ultimately optimized kinetic system not running in real time. The others (Chop10, Chop20, Chop30, Chop60, Chop90 and Chop180 (not chopped)) are generated by resetting the calculation periodically in real time. For example, with the 'Chop10' candidate the sun starts from the horizon (defined as -90 degrees); when the sun moves to -80 degrees the GA calculation resets, and again at -70 degrees, and so on. In this way the computing procedure resets every 10 degrees, so this candidate is chopped and its calculation reset 18 times in total over the 180-degree sun period.
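The chop-time schedule can be sketched as a loop. Everything inside the GA step below is a stand-in (the real experiment calls Galapagos); only the reset bookkeeping, re-seeding an Initial-Boost-style population every reset_interval_deg degrees of sun movement, mirrors the text.

```python
import random

def run_chopped_ga(reset_interval_deg, evaluate, rng=random):
    """Sweep the sun from -90 to +90 degrees, restarting the GA every
    reset_interval_deg degrees (the author's 'chop-time').  Each restart
    re-seeds the population, re-using the Initial Boost idea to restore
    diversity.  The GA step itself is a stand-in: one survivor-plus-
    newcomers step on a random population illustrates the reset
    bookkeeping only."""
    scores = []
    population = None
    for sun_angle in range(-90, 90, 10):
        if population is None or (sun_angle + 90) % reset_interval_deg == 0:
            # Reset: fresh, diverse population of 20 five-gene genomes.
            population = [[rng.random() for _ in range(5)] for _ in range(20)]
        # Stand-in GA step: keep the 10 fittest, refill with newcomers.
        population.sort(key=lambda g: evaluate(g, sun_angle), reverse=True)
        population = population[:10] + [
            [rng.random() for _ in range(5)] for _ in range(10)]
        scores.append(evaluate(population[0], sun_angle))
    return scores
```

With `reset_interval_deg=30` the population is re-seeded at -60, -30, 0, 30 and 60 degrees (5 resets after the start, matching the 'Chop30' candidate); `reset_interval_deg=180` never resets after the start, like 'Chop180'.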

Rough examination of the graph (fig 6-9,1) shows that shorter reset intervals tend to increase the score. However, even the smallest interval (the 'Chop10' candidate, resetting every 10 degrees) cannot reach the score of the 'Kinetic ultimate' candidate. In the earlier half of the period (from -70 to -10) the score is low, less than half the ideal ('Kinetic ultimate') score; in the latter half it comes closer to the ideal. Overall, the 'Kinetic ultimate' candidate clearly has the highest score, while the 'Chop180' candidate, with no reset, has the lowest score in most columns.

Fig 6-9,1: The Comparison between different chopped calculations

fig 6-9,2: Total sum of the score in six candidates

To create this graph, the scores of the six candidates were summed from -90 to +90. This is useful for understanding the previous graph (fig 6-9,1). The result shows that the 'Chop30' candidate has the best reset interval over the whole period. It was assumed that more frequent resets would keep improving the score, but this graph shows that too much chopping in fact leads to a slightly worse score.

For Consideration:

Firstly, chopping the time and resetting the GA has some effect in maintaining a sufficiently large and diverse population. Comparing the 'Chop180' candidate (never actually reset) with the 'Chop30' candidate (reset 5 times) makes this effect clear. Secondly, it was clear in the experiment that for the 'Chop20' and 'Chop10' candidates the GA's optimizing calculations were not completed, which may have lowered their scores. This may depend on the computer's performance; on the computer used in the experiment, the 'Chop30' candidate was therefore the most effective.

In this experiment, chopping the time interval for the GA calculation proved effective in improving the results, but too much chopping overwhelmed the available computer and gave slightly worse results. In other words, when the Objective Function is dynamic, keeping a sufficiently large and diverse population in the GA is important for an effective result.

6-10. Discussion and Conclusion

This set of GA experiments demonstrates that a normal GA does not work effectively in dynamic situations: the score for the real-time kinetic roof was worse than that of the pre-optimized roof. The following paragraphs discuss the possible reason for this and suggest potential solutions.

Usually a GA is designed and used for static problems. Here the word 'static' reflects two basic conditions: 1) the Objective Function never changes while the results are being calculated, and 2) the fitness landscape never changes while the results are being calculated. In this experiment, however, both the Objective Function and the fitness landscape change; this arrangement will be referred to from here on as a dynamic GA. Concretely, the Objective Function fluctuates according to the illumination requirement, set arbitrarily by the author to correspond to the needed architectural function. The fitness landscape fluctuates similarly, as it is determined by both the Objective Function and the position of the sun, both of which were changing.

fig 6-10,1: A diagram of ever changing fitness landscape

The left-hand diagram shows the abstract fitness landscape at a certain time (say 10:00 AM, as the initial point). There might be several hills, one of which is the highest. Within that time frame the GA may or may not reach the top of the hill, but it can usually achieve a good value. However, in the next time frame (for example 11:00 AM, the right-hand figure), the previously optimized answer has already become outdated, because the fitness landscape has changed with time. This landscape is determined by various dynamic parameters.

When only the sun's position is considered, this ever-changing fitness landscape is a continuous function, because the sun's locus is a single continuous curve: the sun rises in the east and sets in the west along a circular path. However, in an actual environment it is impossible to ignore disturbances such as clouds blocking the sun's rays, or temporary light sources such as light pollution from neighbours. These unpredictable noise factors make the fitness landscape a discontinuous function.

A normal GA solver has no idea of the shape of the fitness landscape when it starts calculating. This is why a GA uses special techniques such as the initial boost to quickly identify a rough shape for the landscape, and it is also why a GA is slow and poorly adaptable in dynamic situations.

From this issue came the idea of using the potential of the initial boost to raise scores again and again. This periodic reset of the GA is called 'chop-time' by the author. In the experiment above, the 'chop-time' candidates generally cannot achieve an optimum curve equal to the 'Kinetic ultimate' candidate, though there are small jumps in their scores. The reason is as follows: the initial boost helps spread the individuals' genomes and thus the overall diversity of the population, which in turn sometimes helps in finding a new fitness hill. Unfortunately, when the calculation time is chopped too finely, there is not enough time for the answers to evolve and be resolved. This solution is therefore a double-edged sword, solving one problem while creating another.

Some hypothetical solutions are proposed below. The key issue for improving the score is how to compensate for the lack of initial information in a moving landscape. Two possibilities are:

• Insert sensor information into the GA’s ongoing calculation as an interruption input.

• Insert user demands into the GA’s ongoing calculations as interruption input.