Chapter 3


3-1. Introduction

This chapter describes the key methodologies used in this research. First, the key philosophies, drawn both from architecture and from other fields, are presented. Next, the engineering tools are introduced; these include hardware and software tools, as well as specific algorithms. Finally, the use of the Genetic Algorithm (GA) is considered and reviewed in detail.

3-2. Philosophy

3-2-1. The Philosophy of Programmable Architecture

In parametric architecture, it is not difficult to design an aggregation of a single component type, as long as the components' edges can touch; a surface can easily be populated with 3D objects. When taking this sort of design approach, it is very important to keep in mind what kind of performance one is looking for. Otherwise, a designer cannot achieve a constructive (useful, appropriate) solution.

Component-based and bio-inspired designs have been advocated by Michael Weinstock since the 1990s; 'fitness criteria' is the key term Weinstock uses to evaluate these design methodologies. Their growing popularity in architectural research, especially in the United Kingdom and the United States, is owed to developments in computer-aided calculation and design. However, such design methods have existed in traditional forms as well: architecture has a fixed scale, since the immutable size of the human body creates a basic component module, and component size is always constrained by the convenience of construction.

In contrast to these parametric systems, the component system developed here is kinetic, and thus morphological differentiation is not needed: differentiation occurs through the programmable aspect of the architecture. Owing to their visual similarity, Programmable Architecture (PA) is frequently mistaken for Parametric Architecture. The two approaches differ, both at the level of the design system and in their physical modelling ('architecture' is used here to refer both to a system and to a building).

The methodology of Parametric Architecture consists, first, of preparing the original components, analogous to induced pluripotent stem (iPS) cells, together with the global shape; each component is then morphed in order to adapt itself locally to the global shape. In PA, by contrast, the global result is more than the morphing and distribution of the components along the global shape: the components are always identical, reacting locally to deformations of the global shape. Parametric Architectural methodology has developed only recently, with the availability of computer-aided design (CAD) software.

Thanks to the increasing variety of fabrication technologies, a greater variety of shapes can also be materialised. This parallels the 'differentiated' shapes derived from a single component that one finds in the biological realm. Unlike in nature, however, these deformed shapes do not have unique functions. Thus, while parametric methodology enables free-form design and production, it is difficult to identify any other 'architectural' usefulness beyond the architect's desire to make unique shapes.

3-2-2. Ubiquitous Architecture

Ubiquitous Computing is a computational model of human-computer interaction in which information processing is thoroughly integrated into everyday objects and activities. More formally, Ubiquitous Computing is defined as "machines that fit the human environment instead of forcing humans to enter theirs" (J. York, 2004). The idea was proposed by Mark Weiser during his tenure as Chief Technologist of the Xerox Palo Alto Research Center (PARC), and he coined the phrase "ubiquitous computing" around 1988.

Both alone and with PARC Director and Chief Scientist John Seely Brown, Weiser wrote some of the earliest papers on the subject, largely defining it and sketching out its major concerns (Weiser, 1991; Weiser, 1996).

The second aspect is pervasive computing. At their core, all models of ubiquitous computing share a vision of small, inexpensive, robust networked processing devices, distributed at all scales throughout everyday life and generally turned to distinctly commonplace ends. For example, a domestic ubiquitous-computing environment might interconnect lighting and environmental controls with personal biometric monitors woven into clothing, so that illumination and heating conditions in a room might be modulated continuously and imperceptibly. Another common scenario posits refrigerators "aware" of their suitably tagged contents, able both to plan a variety of menus from the food actually on hand and to warn users of stale or spoiled food (adapted from Wikipedia; Sakamura, 2007).

Cooperative research relevant to the architectural field is taking place in several organisations. First, Dr. K. Sakamura of the University of Tokyo aims to enable any everyday device to broadcast and receive information (Sakamura, 2007). Second, MIT has also contributed significant research in this field, notably the Things That Think consortium (directed by Hiroshi Ishii, Joseph A. Paradiso and Rosalind Picard) at the Media Lab, and the CSAIL (MIT Computer Science and Artificial Intelligence Laboratory) effort known as Project Oxygen (MIT-Media-Lab, 2007).

In this project, architecture is defined as an intelligent machine: an integration of computers dedicated to sensing, calculating and actuating, capable of decision-making in order to produce an interactive interface. The proposal embeds computational systems behind walls to make the architecture interactive, including various kinds of sensors (in this thesis, especially light sensors) and RFID tag technology. (Radio-frequency identification (RFID) uses radio waves to transfer data from an electronic tag, called an RFID tag or label, attached to an object, to a reader, for the purpose of identifying and tracking the object.)

These technologies are used not only for sensing inputs, but also as evaluators of the effectiveness of the architectural system, implementing a 'feedback loop'. Several study methods in this thesis will examine how kinetic architecture can be more efficient than static architecture, and how to control the architectural machine in order to provide kinetic-interactive architecture (the illumination experiment described below is one example). The tools used, especially Arduino (hardware) and Processing (software), will be discussed.
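The sensing-calculating-actuating feedback loop described above can be sketched minimally as follows. The sensor averaging, the `decide` rule and the 500-lux set-point are illustrative assumptions, standing in for the actual Arduino serial link:

```python
# A minimal sketch of the sense-calculate-actuate feedback loop. The
# sensor read and the actuation command are placeholders for the actual
# Arduino serial link, and the 500-lux set-point is illustrative.

TARGET_LUX = 500  # hypothetical illumination set-point

def read_light_level(raw_readings):
    """Average the raw light-sensor readings (placeholder for serial input)."""
    return sum(raw_readings) / len(raw_readings)

def decide(measured, target=TARGET_LUX):
    """Calculate an actuation command from the sensed illumination."""
    if measured < target * 0.9:
        return "open"    # too dark: open the kinetic components
    if measured > target * 1.1:
        return "close"   # too bright: close them
    return "hold"        # within tolerance: the loop has settled

# One pass of the loop with simulated sensor values:
command = decide(read_light_level([420, 430, 410]))
```

In the physical model, the same cycle runs continuously: the evaluation of each actuation feeds the next decision, which is what makes it a feedback loop rather than open-loop control.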

The aim is to build computational flexibility into the building in the first instance, so that the system can achieve any required change through software updates. It will be necessary to think further about the appropriate type of technology for each of the four hierarchical levels defined earlier. Is the same kind of technology to be used at each of the four levels? If not, what functions should the Room have, and how are these to "inform" the higher levels?

3-2-3. Programmable Matter for Architecture

The field called 'programmable matter' (Toffoli, 1991) is highly suggestive. There are several ways to realise it, depending on scale: some are material-based, such as complex fluids, metamaterials and shape-changing molecules; others are robotics-based, such as modular robotics and claytronics. Researchers in this field have also pointed out that "scale is one key differentiator" (Toffoli, 1991). At the molecular level, for example in a liquid crystal display, an established example of programmable matter, the individual units of the ensemble can compute; the result of their computation, triggered by an applied current or voltage, is a change in the ensemble's physical properties.

For example, if a designer wants to use a fabric as cloth, its grid should be smaller than the human scale; and if it is to retain ordinary functions, such as trapping air for insulation, this homogeneous grid should be finer than roughly 10 mm, to avoid circulation of the air.

Hence, in the architectural field, 'programmable matter' has to carry architectural meaning. This proposal takes a robotic, kinetic approach, which places the project within modular robotics. A typical component scale is roughly defined by a 100 mm × 100 mm × 100 mm three-dimensional grid, and the whole fabric will span roughly 100 m × 100 m so as to cover an urban patch, because its architectural function is to be a structure that integrates people.

The estimated issues are as follows. First, the actuator is critically affected by scale. In the case of the Okayama competition, the actuator was an artificial muscle made from a kind of shape memory alloy. Its strength was sufficient for the 1/10 model but not for the actual 1/1 scale, and no thicker wire was available; this put the project on hold. Second, a user (a human) already has a scale of their own: roughly 1.7 m in height, with a hand about 40 cm long; humans can grasp neither the Earth nor an atom. If the proposed fabric is used for furniture, the component scale would need to be smaller than the human body.

3-3. Engineering Tools

3-3-1. Required Hardware

3-3-1-1. Actuator Properties

(Material, Power to sustain a force, Time response, Size, Weight, Max speed)

There is a new field called 'programmable matter', which refers to matter that has the ability to change its physical properties (shape, density, moduli, optical properties, etc.), and there are many ways to realise this idea. Programmable architecture is deeply related to this programmable matter.

With reference to this matter, the hardware should be a homogeneous, component-based grid, equipped with sensors and actuators that can meet any required demand. The parameters concern not only geometrical shape but also environmental properties such as transparency: transparent glass, semi-transparent glass or milky-white glass in terms of vision, for instance.

The proposed model had its merits, but the fabric could not hold its shape by itself without electricity, which was a problem. Referring to Tristan d'Estree Sterk's actuated tensegrity components (from his thesis, Using Actuated Tensegrity Structures to Produce a Responsive Architecture), the idea arose of making the fabric more stable using an additional vertical member. In Sterk's thesis this member is used as an actuator; here, the vertical member serves purely as a shape supporter, because the tension member already acts as the actuator, and it is that role which must be preserved. In addition, this proposal adds a different actuator and a smarter control system using Arduino.

- Characteristics of the selected shape memory alloy

Shape memory alloy wire contracts like a muscle when an electric current is applied. Its ability to shorten by approximately 5% arises because nickel and titanium are present in the alloy in almost exactly a 50:50 ratio, and the atoms dynamically change their internal arrangement at certain temperatures. The movement occurs through an internal "solid-state" restructuring of the material that is silent, smooth and powerful. One possible application is actuation by electrical heating: the spring becomes a lightweight linear actuator requiring no other moving parts, and has been used in robotics, pumps, window openers and locks. Many of the tasks done with small motors or solenoids can be done with muscle wire, which is smaller and less expensive.

The Smart “NiTi” Spring (manufacturer: RVFM) has been selected as the actuator for this thesis's physical model. It is a spring of 5.5 mm external diameter, made from nickel-titanium alloy that has been heat-treated to provide memory behaviour. Its length when closed is 20 mm and its weight is 1.1 g. According to the manufacturer's product description, at room temperature the spring is soft enough to be pulled out to approximately 150 mm by applying a small amount of force. When heated to 70°C by passing an electric current through it, the spring contracts to its original length with a pulling force equivalent to lifting a 0.5 kg weight.
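The data-sheet figures above imply a stroke and a pulling force that are worth stating explicitly; a quick check (the only value added here is g = 9.81 m/s²):

```python
# A quick check of the data-sheet figures quoted above. The only
# assumption added here is g = 9.81 m/s^2.

g = 9.81                     # gravitational acceleration, m/s^2
closed_length_mm = 20.0      # spring length when contracted
extended_length_mm = 150.0   # length after stretching at room temperature
lifted_mass_kg = 0.5         # pull "equivalent to lifting a 0.5 kg weight"

stroke_mm = extended_length_mm - closed_length_mm  # usable travel: 130 mm
pull_force_n = lifted_mass_kg * g                  # roughly 4.9 N of pull
```

So each contraction cycle offers about 130 mm of travel against a pull of roughly 4.9 N, which is the budget available for moving a component of the fabric.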

“The mechanism of actuation in shape memory alloys is a temperature-induced phase change which produces a significant shear strain on heating above the transformation temperature. This effect has given rise to a variety of applications (Duerig 1990). “(J. E. Huber, N. A. Fleck and M. F. Ashby, 1997)

“High values of σmax (up to about 7×10^8 N m^−2) and εmax (up to about 7×10^−2) can be achieved in nickel–titanium alloys of approximately equiatomic composition.“ (J. E. Huber, N. A. Fleck and M. F. Ashby, 1997)

The manufacturer's technical data sheet gives the characteristics of the shape memory alloy spring. Overheating the spring may destroy its memory and may ignite adjacent materials, so the current passed through the spring must be kept below 3 A. For example, a 6 V lantern battery connected across the two ends of the extended spring will supply sufficient current for the spring to contract. The resistance of the spring is quite low, so the battery will discharge quickly if left connected. The spring is not suitable for soldering and therefore has to be joined mechanically to its leads; a terminal connecting block provides a simple connection once the two extremities of the spring are straightened.
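The 3 A ceiling and the 6 V supply quoted above jointly imply a minimum total circuit resistance, which Ohm's law makes explicit (illustrative arithmetic only; the spring's actual resistance value is not quoted in the data-sheet excerpt):

```python
# Ohm's-law reading of the figures quoted above: a 6 V supply with a
# 3 A ceiling implies a minimum total circuit resistance of V / I_max.
# Illustrative arithmetic; the spring's actual resistance is not quoted.

supply_voltage = 6.0   # V, lantern battery
current_limit = 3.0    # A, maximum safe current through the spring

min_resistance = supply_voltage / current_limit  # ohms
# Below 2 ohms of total resistance, a series resistor (or limiting the
# duty cycle of the drive signal) would be needed to respect the 3 A cap.
```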

A bias spring or mass may be used to extend the smart spring while it is relaxed at room temperature. When electricity is applied, the contraction force of about 10 N overcomes the bias and can do additional work as actuation. In this cycle the rate of relaxation is slower than that of contraction, but it can be sped up by cooling (e.g. moving the surrounding air with a fan).

As set out below, the author reviewed the actuator's properties against the criteria described in the paper "The selection of mechanical actuators based on performance indices" by J. E. Huber, N. A. Fleck and M. F. Ashby (1997).

- Highest power and displacement

“At low frequency, hydraulic and shape memory alloy systems have the highest values of volumetric power, which is consistent with their high values of volumetric stroke work. ” (J. E. Huber, N. A. Fleck and M. F. Ashby, 1997)

“To minimize the volume, the product σmaxεmax must be maximized; σmaxεmax is the performance index for this problem. Figure 6 suggests that hydraulic systems or systems based on shape memory alloys would be selected where force and displacement are the only criteria. If the mass of the actuator were to be minimized, σmaxεmax/ρ would become the performance index.“ (J. E. Huber, N. A. Fleck and M. F. Ashby, 1997)

fig 3-3-1-1,1: Actuation stress versus actuation strain for various actuators

(From "The selection of mechanical actuators based on performance indices")

As quoted above, the paper assesses that nickel-titanium alloys have the highest volumetric power at low frequency, and the highest combination of force and displacement, compared with the other means of actuation.

- Cycle Operation

“For actuation, shape memory alloys are often used in the form of a wire or foil which reduces in length when heated, and can be returned to its original length by cooling and then stretching. “(J. E. Huber, N. A. Fleck and M. F. Ashby, 1997)

“Heating can be achieved by electrical resistance in shape memory alloy wires, with the resulting tensile forces providing a single acting actuator. “(J. E. Huber, N. A. Fleck and M. F. Ashby, 1997)

“Shape memory alloys typically provide a single action and require an external system to reset them for cyclic operation.”(J. E. Huber, N. A. Fleck and M. F. Ashby, 1997)

“In the present analysis, the resetting mechanism is assumed to be a second shape memory alloy actuator—alternative resetting mechanisms such as springs or weights are also common. “(J. E. Huber, N. A. Fleck and M. F. Ashby, 1997)

In my physical model a spring is used to reset the force. The movement of the actuator is measured, in part, by the level of stress one uses to reset the wire, or to stretch it in its low temperature phase. This opposing force, used to stretch the wire for cycle operation, is called the bias force. In my physical model, the bias force is exerted on the wire constantly by the spring, and on each cycle as the wire cools, this force elongates it. If no force is exerted as the wire cools, very little deformation or stretch occurs in the cool, room temperature state and correspondingly very little contraction occurs upon heating.

According to the manufacturer, up to a point the higher the load the higher the stroke. The strength of the wire, its pulling force and the bias force needed to stretch the wire back out are a function of the wire size or cross sectional area and can be measured in pounds per square inch or “psi”. If a load of 5,000 psi (34.5 MPa) is maintained during cooling, then about 3% memory strain will be obtained. At 10,000 psi ( 69 MPa), about 4% results, and with 15,000 psi (103 MPa) and above, nearly 5% is obtained.
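The manufacturer's bias-load figures quoted above can be turned into a rough design estimate by linear interpolation; this sketch assumes the relation is approximately linear between the tabulated points, which the data sheet does not state:

```python
# The manufacturer's bias-load figures quoted above, with linear
# interpolation between the tabulated points. Linearity between points
# is an assumption of this sketch, not a data-sheet claim.

points = [(5000, 3.0), (10000, 4.0), (15000, 5.0)]  # (bias load psi, % strain)

def memory_strain(psi):
    """Estimate % memory strain for a bias load within the tabulated range."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= psi <= x1:
            return y0 + (y1 - y0) * (psi - x0) / (x1 - x0)
    raise ValueError("bias load outside tabulated range")

# e.g. a 7,500 psi bias load suggests roughly 3.5% recoverable strain
```

Such an estimate is useful when sizing the bias spring: the chosen bias stress fixes how much of the nominal 5% stroke is actually recovered on each cycle.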

fig 3-3-1-1,2: Normal Bias Spring

(From the “Data Sheet of the Smart ‘NiTi’ Spring”, Rapid Online)

fig 3-3-1-1,3: Shape memory alloy (NiTI) in the model

The picture shows the connection joints and each part: SMA, rods, membrane and wire.

3-3-1-2. How to Sense

fig 3-3-1-2,1: The Screenshot of Sensing with Arduino (K.Hotta)

This is the actual interface of Arduino. Using serial transfer, the sensor returns its values in real time.

fig 3-3-1-2,2: The Screenshot of Sensing with Arduino part2 (K.Hotta)

This shows the Arduino board and the actual interface of the Arduino software.

3-3-2. Required Software

Consideration must be given to three aspects of the software: the control program, the human interface, and multiple-user participation. The control program has to manage the conflict between individual users and the central control system. A two-way system is proposed, with an emergent element taking input from the bottom up and a central control element taking input from the top down. The precedence of each element is set by a threshold or weighting. The global form is the result of the totality of local decisions, with consideration given to each component's potential response. Each component's control system is a simple piece of code calculated locally, to reduce the overall amount of system calculation. Each component is connected with its neighbours, so that together they form a network.

As a result, they can achieve higher intelligence. An example of such a program was prepared for the Okayama-LRT competition 2010 by K. Hotta and A. Hotta. Using a spring system, the white circles (agents) continually changed position depending on the weights, in other words the attracting and repelling forces. For example, according to the table, the square and the parking space should be close together for traffic convenience, while the bus stop and the rest room should be distant from each other to prevent noise. In this program, each function is drawn as a white circle and its influences are visualised as lines, which converge at balance points. The spring system is simple, yet able to optimise the problem (higher intelligence).
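The attract-and-repel mechanism described above can be sketched in one dimension; the agent names, positions, rest lengths and step size below are illustrative, not the actual competition data:

```python
# A one-dimensional sketch of the spring system described above: agents
# attract or repel each other and iterate toward balance points. The
# agent names, positions, rest lengths and step size are illustrative.

agents = {"square": 0.0, "parking": 10.0, "bus_stop": 5.0}

# (a, b, rest_length): attracted pairs get a short rest length,
# repelled pairs a long one.
springs = [("square", "parking", 1.0),   # keep close (traffic convenience)
           ("bus_stop", "square", 8.0)]  # keep apart (noise prevention)

def step(agents, springs, k=0.1):
    """One relaxation step: move each agent along its net spring force."""
    forces = {name: 0.0 for name in agents}
    for a, b, rest in springs:
        d = agents[b] - agents[a]
        stretch = abs(d) - rest              # positive: pair is too far apart
        direction = 1.0 if d > 0 else -1.0
        forces[a] += k * stretch * direction
        forces[b] -= k * stretch * direction
    return {name: pos + forces[name] for name, pos in agents.items()}

for _ in range(200):                         # iterate until balance
    agents = step(agents, springs)
# the inter-agent distances now converge to the rest lengths
```

After a few hundred iterations the distances settle at the rest lengths, which is the program's "balance point" where the visualised lines converge.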

The second aspect of the software is the human interface. In Processing, there is a well-known plug-in library for building a mouse-controllable GUI, called 'controlP5'. Using this tool, it is possible to make a convenient, attractive interface for parametrically controlling the architecture, which allows users to participate in controlling the architecture's character. It is possible to further develop this interface to be wireless by using TouchOSC. (TouchOSC is a modular OSC and MIDI control surface for iPhone / iPod Touch / iPad. It supports sending and receiving Open Sound Control messages over a Wi-Fi network using the UDP protocol, and supports both CoreMIDI and the Line 6 MIDI Mobilizer interfaces for sending and receiving MIDI messages.)

The third aspect of the software to be considered is an agent-based system for multiple-user participation. Inputs come not only from environmental stimuli but also from human participation. Environmental stimuli are measured via sensors, while human input is transferred via the Internet and PHP protocols through the interface technique described above.

3-3-2-1. Rhinoceros

Rhinoceros is a two- and three-dimensional CAD (Computer-Aided Design) software package, developed by Robert McNeel & Associates in 2000. It is commonly used to draw free-form surfaces, not only for architectural computer-graphics modelling but also in other product-design fields. A key feature is NURBS (Non-Uniform Rational Basis Spline), which defines forms by control points; this technology was developed in French car engineering in the 1950s. Its flexibility means that every property of a curved surface can be defined in a three-dimensional coordinate system. Another function that has facilitated its broad use is its support for many import and export formats, which helps in adapting drawings to other CAD software and to documentation.

fig 3-3-2-1,1: The Screenshot of Rhinocerous (K.Hotta)

This is the actual interface of Rhinoceros.

3-3-2-2. Grasshopper

'Grasshopper' is a software plug-in for 'Rhinoceros' released in 2007. The developer is Robert McNeel & Associates, with David Rutten doing most of the development. Its common use is graphical algorithm editing through a component-based interface which stores the algorithms. This technology was adopted because it allows several algorithms to be used dynamically and simultaneously, which makes it possible to deal effectively with large data sets and with shape simulations that are very difficult to calculate by hand. It is also useful for environmental design analysis, for instance sunlight, wind and thermal simulation.

fig 3-3-2-2,1: The Screenshot of Grasshopper (K.Hotta)

This is the actual interface of Grasshopper in Rhinoceros.

3-3-2-3. Galapagos

'Galapagos' is one of the components inside 'Grasshopper' for Rhinoceros. The developer is also David Rutten, and it has been in development since 2010. Its common use is mathematical optimisation using an evolutionary solver (based on a Genetic Algorithm) and a simulated-annealing solver. The key benefit is ease of use: optimisation solvers are difficult to script, especially for designers, but this one comes ready-made.

fig 3-3-2-3,1: The Screenshot of Galapagos(K.Hotta)

This is the actual interface of Galapagos inside Grasshopper in Rhinoceros.

3-3-2-4. Kangaroo Physics

'Kangaroo Physics' is a plug-in for 'Grasshopper' in 'Rhinoceros'. Kangaroo is a live physics engine for interactive simulation, optimisation and form-finding directly within Grasshopper. The engine can calculate gravity, mass, velocity, friction and airflow within the model using a particle system. The particle system is an abstraction of the real world, yet it corresponds well with it, the simulation yielding approximately similar results to real-world tests. The developer of this add-on is Daniel Piker. The manual (Daniel, P. 2011) suggests this plug-in 'can be used for various sorts of optimization, structural analysis, animation and more.'

fig 3-3-2-4,1: The Screenshot of Kangaroo Physics (K.Hotta)

This is the actual interface of Kangaroo Physics inside Grasshopper in Rhinoceros.

3-3-2-5. Processing

Processing is an open-source programming language and integrated development environment (IDE) built for various types of designers. The project was started in 2001 by Casey Reas and Benjamin Fry (both then at MIT). Common uses are visual design and real-time visual feedback, and the language is a simplified form of Java. Other popular features include support for multiple input devices beyond the mouse, such as a microphone or camera. It supports OpenGL and is released under the GPL (General Public License).

fig 3-3-2-5,1: The Screenshot of Processing (K.Hotta)

This is an actual interface of Processing.

3-3-2-6. Arduino

Arduino is an open-source microcontroller platform comprising both software and hardware. It was started in 2005 as a project for students at the Interaction Design Institute Ivrea in Ivrea, Italy. The microcontroller can be controlled by external software on a host computer, such as Processing or Adobe Flash. Because it is open source, anyone can assemble the hardware for the microcontroller from the design information provided for the easily usable graphical layout editor (EAGLE). In this experiment, the Arduino Mega was chosen for its large number of I/O pins and the extensibility of its external power supply.

fig 3-3-2-6,1: The Screenshot of Arduino and ArduinoMega Board

This is an actual interface of Arduino, and the picture of the ArduinoMega board (K.Hotta, pics from

3-3-2-7. Traer physics

Traer Physics is a simple particle-system physics engine for Processing, developed by Traer Bernstein. The purpose of adopting this technology is to make particles, apply forces, and calculate the positions of particles over time in real time. It has four parts: 'ParticleSystem', which takes care of gravity, drag, making particles, applying forces and advancing the simulation; 'Particles', which move around in 3D space based on the forces applied to them; 'Springs', which act on two particles; and 'Attractions', which also act on two particles. New features of the current version include a choice between the previous Runge-Kutta integrator and a faster but less stable modified Euler integrator, custom forces, the removal of springs and attractions, and open-source availability.
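The four parts listed above can be illustrated with a minimal Python transcription, restricted to 2D for brevity. The class names mirror the library's, but the implementation is an illustrative sketch, not the library's own code; an Attraction force would follow the same pattern as Spring:

```python
# A minimal transcription of the Traer-physics concepts into plain
# Python (2D instead of 3D for brevity). Class names mirror the
# library's; the implementation is an illustrative sketch.

class Particle:
    """Moves in space under whatever forces are applied to it."""
    def __init__(self, x, y, mass=1.0):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0
        self.fx, self.fy = 0.0, 0.0
        self.mass = mass

class Spring:
    """Acts on two particles, pulling them toward a rest length."""
    def __init__(self, a, b, k, rest):
        self.a, self.b, self.k, self.rest = a, b, k, rest

    def apply(self):
        dx, dy = self.b.x - self.a.x, self.b.y - self.a.y
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9  # avoid divide-by-zero
        f = self.k * (dist - self.rest)
        fx, fy = f * dx / dist, f * dy / dist
        self.a.fx += fx; self.a.fy += fy
        self.b.fx -= fx; self.b.fy -= fy

class ParticleSystem:
    """Owns particles and forces, and advances the simulation in time."""
    def __init__(self, gravity=0.0):
        self.particles, self.forces, self.gravity = [], [], gravity

    def tick(self, dt=0.1):
        for p in self.particles:               # reset forces, add gravity
            p.fx, p.fy = 0.0, p.mass * self.gravity
        for force in self.forces:              # accumulate spring forces
            force.apply()
        for p in self.particles:               # simple Euler integration
            p.vx += p.fx / p.mass * dt
            p.vy += p.fy / p.mass * dt
            p.x += p.vx * dt
            p.y += p.vy * dt

# Demo: two particles joined by a spring drift toward its rest length.
a, b = Particle(0.0, 0.0), Particle(2.0, 0.0)
sim = ParticleSystem()
sim.particles += [a, b]
sim.forces.append(Spring(a, b, k=1.0, rest=1.0))
sim.tick()
```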

(Reference from

3-3-3. Brief Introduction to the Genetic Algorithm

This thesis is not only about the Genetic Algorithm (GA); it also proposes a modified hardware-software system for higher adaptation. A full treatment of the algorithm would become a paper in itself, so it is explained only in brief. "A GA is a search heuristic that mimics the process of natural selection. This heuristic (also sometimes called a metaheuristic) is routinely used to generate useful solutions to optimization and search problems." (Mitchell, 1996) The GA belongs to the broader family of evolutionary algorithms (EA), which generate solutions to optimisation problems using mechanisms inspired by natural evolution, such as inheritance, mutation, selection and crossover. GAs have been applied in bioinformatics, phylogenetics, computational science, engineering, economics, chemistry, manufacturing, mathematics, physics, pharmacometrics and various other fields.

3-3-3-1. History of Genetic Algorithm

In this section, the history of evolutionary algorithms, including the GA, is described briefly. The GA with an evolutionary solver is not new today (2013); the first reference point for the field is, indeed, Charles Darwin. According to the creator of Galapagos, David Rutten, Lawrence J. Fogel first introduced the idea to computing in the 1960s with the landmark paper "On the Organization of Intellect". In the early 1970s, Ingo Rechenberg and John Henry Holland (University of Michigan, 1975) are said to have developed it further. Until Richard Dawkins' book "The Blind Watchmaker" in 1986, evolutionary computation was popular only in the programming world, not with the lay public. In that book Dawkins presented a small program called Biomorph (a word coined by Desmond Morris), which generated a seemingly endless stream of body plans under artificial selection. Since the 1980s, the diffusion of personal computers has made it possible for individuals to apply evolutionary principles to personal projects, without resources such as government funding, and the terms have since entered common parlance. In the 3D CAD field, 'Galapagos', the Grasshopper plug-in for Rhinoceros, was released in 2010, targeting mainly non-programmers such as architects and designers.

3) In a GA, intermediate answers can be obtained at practically any time, because its run-time process is progressive. Unlike many dedicated algorithms, GA output is a continuous stream of answers. Although the newer answers tend to be of higher quality, an answer is available nonetheless, so even a premature run can bring a harvest of sorts that could be called a result. This is a great benefit for real-time usage.

4) GAs, including their evolutionary solvers, can be manipulated by the user. There are many opportunities for interaction between the algorithm and a human user or controller, because the run-time process is highly transparent and browsable. It is important to note that the GA and its solver can be coached by human intelligence, even in mid-process, sometimes by interrupting the process. It can therefore be goaded into exploring not only the normal optimum but also sub-optimal branches, if needed.

Second, since every approach has drawbacks and limitations, two weak points, i.e. disadvantages, are raised here.

Evolutionary algorithms, including the GA, are slow: it takes a long time to get a beneficial result, although this depends on settings such as fitness pressure. In particular, complicated setups that require a long time to solve a single iteration quickly get out of hand; it is not unheard of for a single process to run for more than a day. According to David Rutten, the creator of Galapagos, at least 50 generations of 50 individuals each are needed, and this is almost certainly an underestimate unless the problem has a very obvious solution; such a run could take two days. For comparison, the four-component iteration in this thesis needs about three minutes to reach a near-final result.
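Rutten's estimate quoted above can be made concrete with a little arithmetic; the one-minute-per-iteration figure is an illustrative assumption:

```python
# Rutten's estimate made explicit: 50 generations of 50 individuals is
# 2,500 fitness evaluations. The per-evaluation time is an illustrative
# assumption; at one minute each, the run approaches two days.

generations = 50
population = 50
minutes_per_evaluation = 1.0   # assumed; depends entirely on the setup

evaluations = generations * population               # 2,500 evaluations
hours = evaluations * minutes_per_evaluation / 60    # about 41.7 hours
```

The total runtime therefore scales linearly with the cost of a single fitness evaluation, which is why a slow Grasshopper definition dominates the solve time.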

Evolutionary algorithms, including the GA, do not guarantee a solution. Unless a predefined 'good-enough' value is specified, the process will tend to run on indefinitely. The technique also requires a well-designed objective (evaluation) function: when the evaluation function is badly defined, the solver may not be able to recognise the best solution, or may simply never reach it.

3-3-3-3. The Basic GA Procedure

Here, the procedure of the Genetic Algorithm (GA) is explained briefly. There are broad discussions and histories of these topics in computation, biology and even philosophy; they are touched on only briefly here, because this thesis is not about the GA itself but about a whole new system.

fig 3-3-3-3,1 Basic GA procedure

3-3-3-3-1. Step-1) Generate Initial Group

At the initial point, when a GA starts, nothing is inherited from a previous session. The initial generation, called generation-0, has to be created; the common way is to scatter individuals randomly across the fitness landscape. This is inevitably guesswork: since all the genomes in generation-0 are picked at random, it is quite improbable that any of them will have hit a reward. The number of genes is equal to the number of dimensions of the fitness landscape, so increasing the number of genes dramatically reduces the probability of accidental good luck, such as landing on the skirts of the highest mountain, whose summit is the optimum. Moreover, when generation zero does not provide enough diversity, the population can easily fall into a local optimum and never reach the true one. Picking the best-performing genome from the initial population is not enough: the best performers in generation zero must be bred to create generation one. Subsequent generations are created from their predecessors using a combination of breeding, selection and mutation algorithms.
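Step 1 amounts to scattering random genomes across the search space; a sketch follows, in which the population size, genome length, bounds and fixed seed are illustrative assumptions:

```python
# A sketch of Step 1: generation-0 is created by scattering random
# genomes across the search space. Population size, genome length,
# bounds and the fixed seed are illustrative assumptions.

import random

def initial_population(size, genome_length, low=0.0, high=1.0, seed=42):
    """Generation-0: every gene drawn uniformly at random from [low, high]."""
    rng = random.Random(seed)
    return [[rng.uniform(low, high) for _ in range(genome_length)]
            for _ in range(size)]

gen0 = initial_population(size=50, genome_length=4)
```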

3-3-3-3-2. Evaluation

Once the initial candidate genomes (on the first iteration) or inherited genomes (on subsequent cycles) are bred, their offspring appear somewhere in the intermediate model space. This model space is commonly represented as a three-dimensional graph, the fitness landscape. When the solver starts, it has no idea of the actual shape of the fitness landscape, so its second step is to populate the model space with a collection of individuals and their genomes, random or otherwise, thereby exploring fresh ground.

fig 3-3-3-3-2,1 Fitness landscape

This is the typical expression of a fitness landscape, in which the vertical axis shows fitness and the two horizontal axes show the projected model space. Every individual is plotted as a unique dot within this rectangular space. An exact position can only be read off directly when there are just two genes, i.e. two variable parameters: one axis indicates gene-0 and the other gene-1, because the number of genes equals the number of dimensions.

Usually, however, there are more than two genes, so the actual space has more than three dimensions, which is difficult to show on paper or even in a three-dimensional model. The common way to represent such a space is to project it into three dimensions (as in fig 5-3-2). Every combination of gene values corresponds to a specific point in the space and indicates a particular fitness, shown as high or low in the Z direction. The algorithm attempts to find the highest peak in this landscape; this is called the optimum.

As fig 5-3-2 shows with colored dots, once the system knows how fit every genome is, it can rank them from fittest to least fit. Essentially the system is searching for a 'high place' in the landscape (when maximizing the evaluation function; a 'low place' when minimizing), and it is a reasonable assumption that the higher genomes are closer to potential high ground than the lower ones. The system therefore kills off the worst performers and focuses on the remainder; this, however, is the next step: selection.

Regarding the 'fitness function': 'fitness' has long been a contentious concept in biology. It is often said that "fitness is the result of a million conflicting forces" or that "evolutionary fitness is the ultimate compromise". A formal definition is: "If p is the probability that an organism at the egg stage will reach adulthood, and e is the expected number of offspring that the adult organism will have, then the organism's overall fitness is the product pe" (Sober, 2001).

However, leaving aside the finer biological discussion, 'fitness' here refers only to the computational side. In evolutionary computation, at least, fitness is a simple concept. The fitness function is a particular type of objective function used to summarize, as a single figure of merit, how close a given design solution is to achieving the set aims (Nelsona et al., 2009). In general terms, fitness measures how close a solution is to what the user (the person running the GA) wants; the user is trying to solve a specific problem and therefore knows what it means to be fit. A fit individual has the following property: on average, it produces more offspring than an unfit one, so there is a correlation between fitness and the number of offspring. Alternatively, one can count the number of grand-offspring; a better measure still is the frequency in the gene-pool of the alleles that make up the individual in question.
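
As a minimal computational illustration of this "single figure of merit" idea (the target vector below is a made-up example, not an evaluation criterion from this research), a fitness function can simply score how close a genome comes to the user's set aim:

```python
def fitness(genome, target):
    """Single figure of merit: 1.0 at a perfect match with the aim,
    falling towards 0 as the genome moves away from it."""
    error = sum((g - t) ** 2 for g, t in zip(genome, target))
    return 1.0 / (1.0 + error)

# A genome identical to the target scores 1.0; distant genomes score less.
assert fitness([1.0, 2.0], [1.0, 2.0]) == 1.0
assert fitness([0.0, 0.0], [3.0, 4.0]) == 1.0 / 26.0
```

In a real design problem the 'target' is rarely a simple vector; the fitness function might instead run a structural or environmental simulation, but the role it plays in the GA is the same.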

fig 3-3-3-3-2,2 Climbing up fitness landscape

When a specific genome is evaluated, it ends up somewhere on the fitness landscape. Because the fitness landscape is merely a projected picture of a higher-dimensional model-space, a random genome is quite unlikely to land on a mountain in the landscape. The route evolution will take is also quite difficult to predict, because the operators involve some randomness. Nevertheless, when selection works correctly, a general tendency can be distinguished. The behavior of genomes over the generations is analogous to the way water always flows downhill along the steepest slope (fig 3-3-3-3-2,2): if the algorithm, and especially its narrowing-down (selection) function, works correctly, genetic descendants will generally climb the mountain roughly along the steepest slope. Because the system rewards higher fitness, every individual tends to maximize its own fitness, and the most effective climb is the steepest line towards high fitness.

In the figure, the colored circles represent the locations of ancestral genomes, and the line tracks represent the pathways of their offspring. Executing the algorithm a number of times effectively lets the genomes interact with the landscape. Through techniques such as crossover, which is explained later, every genome climbs the hill. Every peak in the fitness landscape has an upside-down bowl of attraction around it, whose surfaces descend into the surrounding valleys. Each such region of the landscape represents the set of genomes in model-space that will converge upon that specific peak. The shape of this bowl, and the steepness of its slopes, depend on how the user defines the fitness function.

When the problem is easy to solve, the landscape may consist of a single clear mountain. When the problem is difficult, the landscape may be craggy, or so smooth and flat that it is difficult to find a hilltop at all. With this technique, in the case of a typically difficult problem, the solution tends to get stuck in local optima. However, such problematic fitness landscapes can be manipulated by various techniques.

fig 3-3-3-3-2,3 Difficult Fitness Landscape

The fitness landscape can be of various types (fig 3-3-3-3-2,3). The simplest example, labeled 'Simple' in the figure, consists of a clean mountain and valley, corresponding to the maximum and minimum respectively. The next example, 'Basin', makes the final answer harder to reach, as candidates tend to settle on the basin floor as local optima. Before reaching this plateau, the search improves fitness smoothly, but afterwards the solver tends to lose direction, so the final answer tends to remain ambiguous within a certain range. An even worse example, called 'Flip' here, is a landscape whose peaks are restricted to quite narrow regions, so random sampling of the landscape easily misses them. Concretely, if a successful genome happens to find the second-highest peak on the left, its offspring will rapidly populate that local optimum.

This quickly drives the rest of the population extinct, and the algorithm misses the chance to find the higher peak on the right. The sharper the landscape, the harder the problem is to solve with a GA. Another difficult case is labeled 'Discontinuous' in the figure. Because most of the landscape consists of horizontal patches, there are no local peaks to climb; searching in the usual way is difficult, and there is no continuous improvement for the solver on these plateaus.

When a genome encounters such a flat region, it loses its compass, and even after several generations nothing changes; this is equivalent to zero fitness pressure. Until a genome stumbles upon a higher plateau by accident, most of the dominant genes are meaningless ones. The final and worst case, named 'Noise' here, shows a spiky landscape. Even if the solver reaches the top of a spike, a single crossover may suddenly destroy the offspring's fitness. The reason is that a GA proceeds by guessing the approximately right direction, but in this landscape genomes suddenly fall into crevasses. This sort of landscape renders the algorithm useless.

3-3-3-3-3. Selection

fig 3-3-3-3-3,1 : The Diagram of Different Selection Methods

Isotropic Selection is the simplest selection procedure: every candidate can get a mate. It nonetheless serves a function, as it slows the speed with which a population runs uphill on the fitness landscape, and therefore helps avoid the premature colonization of a local optimum. An individual's chance of finding a mate is constant regardless of its position on the genome map, i.e. regardless of fitness. Since everyone has a chance, this strategy may seem pointless, because it does not by itself push the gene-pool in any direction. However, representative examples exist in nature, such as wind pollination, coral spawning, or the females in a walrus colony: every female gets to breed with the dominant male, no matter how fit or unfit she is.

Exclusive Selection is the method in which only the top N% of the population gets to mate, where N is an arbitrary number between 0 and 100. An individual within the top N% can produce one or more offspring. This method obviously affects the gene-pool by raising average fitness, because only fit individuals reproduce. Instances can be found in nature, for example among male walruses: only a certain percentage of males hold a harem, while the also-rans remain in the colony without any opportunity to breed.
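
A minimal sketch of exclusive (top-N%) selection in Python follows; the function name and the toy population are illustrative assumptions, not part of the system described in this research.

```python
def exclusive_selection(population, fitnesses, n_percent):
    """Return the top N% of the population, ranked by fitness.

    Only these individuals are allowed to mate; everyone below the
    cutoff is excluded from breeding entirely.
    """
    ranked = sorted(zip(fitnesses, population),
                    key=lambda pair: pair[0], reverse=True)
    cutoff = max(1, round(len(population) * n_percent / 100.0))
    return [genome for _, genome in ranked[:cutoff]]

# Toy example: with N = 50, only the two fittest of four genomes mate.
pop = [[0.1], [0.5], [0.9], [0.3]]
fits = [0.1, 0.5, 0.9, 0.3]
parents = exclusive_selection(pop, fits, 50)
```

The `max(1, ...)` guard simply ensures at least one parent survives even for very small N.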

Biased Selection, in which the chance of mating increases with fitness, is another method common in nature, typically seen in species that form stable couples. Essentially everybody is capable of finding a mate, but more attractive individuals have a higher probability of breeding, and thus a better chance of becoming genetic founders of future generations. To manipulate and control the direction of evolution, power functions are sometimes used to amplify biased selection: when fitness should matter more, the curve is exaggerated by raising fitness to a power greater than 1; conversely, it is flattened with an exponent less than 1.

Several concrete algorithms exist, but here 'Roulette Wheel Selection' is introduced as the simplest and most common example. It is also widely known that the plain roulette method is impractical in some situations, so techniques such as 'scaling' or 'tournament' selection may be implemented alongside it. Roulette wheel selection picks individuals with probability proportional to their fitness. It is so named because it resembles a roulette or dartboard with fan-shaped targets, where each target's area corresponds to an individual's fitness.
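
A minimal sketch of roulette wheel selection, including the optional power-function exponent mentioned above for amplifying or flattening the bias (the function name, default values and toy population are illustrative assumptions):

```python
import random

def roulette_select(population, fitnesses, power=1.0, rng=random):
    """Pick one individual with probability proportional to fitness**power.

    power > 1 exaggerates the bias towards fit individuals;
    power < 1 flattens it (power = 0 reduces to isotropic selection).
    """
    weights = [f ** power for f in fitnesses]
    total = sum(weights)
    spin = rng.uniform(0.0, total)      # where the 'ball' lands on the wheel
    running = 0.0
    for genome, w in zip(population, weights):
        running += w
        if spin <= running:
            return genome
    return population[-1]               # guard against rounding error

# Toy example: 'c' occupies 70% of the wheel, so it is picked most often.
pop = [['a'], ['b'], ['c']]
picked = roulette_select(pop, [1.0, 2.0, 7.0])
```

Because even the least fit individual keeps a nonzero slice of the wheel, this is a biased rather than an exclusive selection scheme.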

3-3-3-3-4. Crossover

fig 3-3-3-3-4,1 : The Diagram of Crossover

The purpose of this process is to generate offspring once mates have been selected from the population. When two genomes are mated, the system must decide what values to assign to the genes of the offspring; any value between the two parental extremes may be assumed. Because genes in evolutionary algorithms are not necessarily very similar to biological genes, the algorithmic variant is much simpler than the biological one. Ironically, biological genes are sometimes more 'digital' than programmatic genes; it depends on how one defines the gene. The biological process of gene recombination is horrendously complicated and is itself subject to evolution, for example meiotic drive, where genes evolve to affect the supposedly random process of meiosis and thus improve their chances of ending up in offspring.

As Mendel discovered in the 1860s, genes are not continuously variable quantities; they behave like on-off switches. When Mendel crossed wrinkly and smooth peas, he obtained specific frequencies of each in subsequent generations, but never peas whose skins were somewhere between wrinkled and smooth.

Crossover is one method of coalescence, and a somewhat similar mechanism is said to operate in biological recombination as well. Crossover is best suited to the case where two candidates are already quite similar, and is effective in climbing a fitness hill.

There is no gender or sex in the solver, so combining two genomes is a completely symmetrical process. In crossover mating, the offspring inherits some of its genes from one parent and the remainder from the other (there are a number of ways to exchange genes, such as one-point, multi-point and uniform crossover). Because of this mechanism, the gene values themselves are preserved.
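
The one-point variant can be sketched as follows; this is an illustrative sketch only, and the function name is an assumption.

```python
import random

def one_point_crossover(parent_a, parent_b, rng=random):
    """The offspring inherits genes up to a random cut point from one
    parent and the remainder from the other. Gene values themselves are
    never altered, only recombined."""
    point = rng.randint(1, len(parent_a) - 1)   # cut strictly inside the genome
    return parent_a[:point] + parent_b[point:]

# Toy example with two clearly distinguishable parents.
child = one_point_crossover([0, 0, 0, 0], [1, 1, 1, 1])
```

Multi-point and uniform crossover differ only in how the cut positions (or per-gene choices) are drawn; the value-preserving property is the same.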

Blend Coalescence (including blending preference) is another method of coalescence. Here, new values are computed for the genes instead of duplicating existing ones. The simplest logic is to average the parental values; other variants use biased percentages in the interpolation. The latter is called blending preference, and is usually weighted by relative parental fitness: when one parent is fitter than the other, its genes take a higher proportion in the offspring. This operation is a quite natural way to obtain fitter descendants. It is not entirely without precedent in biology, depending to some extent on what level of scale one defines as a 'gene'.
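
A minimal sketch of blend coalescence with blending preference; the simple fitness-ratio weighting used here is an illustrative assumption, one of several possible ways to bias the interpolation.

```python
def blend(parent_a, parent_b, fitness_a, fitness_b):
    """Blend coalescence with blending preference: each offspring gene is
    an interpolation of the parental genes, weighted by relative fitness,
    so the fitter parent contributes the larger share."""
    t = fitness_a / (fitness_a + fitness_b)   # share contributed by parent A
    return [t * a + (1.0 - t) * b for a, b in zip(parent_a, parent_b)]

# Equal fitness reduces to plain averaging:
assert blend([0.0], [1.0], 1.0, 1.0) == [0.5]
# A parent three times as fit contributes three quarters of the value:
assert blend([0.0], [1.0], 3.0, 1.0) == [0.25]
```

Unlike crossover, this operator invents new gene values, so it can reach points in model-space that neither parent occupies.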

3-3-3-3-5. Mutation

fig 3-3-3-3-5,1 : The Diagram of Mutation

Mutation is the only mechanism that can increase diversity in the population. All of the mechanisms discussed so far, such as selection, coupling and coalescence, are essentially designed to narrow the answer down within one fitness mountain.
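
A minimal sketch of a mutation operator; the per-gene rate, the Gaussian noise model and the function name are illustrative assumptions rather than the specific operator used in this research.

```python
import random

def mutate(genome, rate=0.05, scale=0.1, rng=random):
    """With probability `rate` per gene, nudge the value with Gaussian
    noise of standard deviation `scale`. This is the only operator that
    injects fresh diversity, letting the population escape the single
    fitness mountain that selection and coalescence converge upon."""
    return [g + rng.gauss(0.0, scale) if rng.random() < rate else g
            for g in genome]

# Toy example: a genome of ten identical genes, half of which (on
# average) receive a small random nudge.
mutated = mutate([0.5] * 10, rate=0.5)
```

Setting the rate too high degenerates the search into random sampling; too low, and the population may stagnate on a local optimum.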

3-3-3-3-6. Re-generation and Repetition

The system now has a new population, which is no longer completely random and which is already starting to cluster around the fitness 'peaks'. All it has to do is repeat the above steps, killing off the worst performing genomes and breeding the best performing ones, until it reaches the highest peak.
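
The whole cycle described in this section (generate, evaluate, select, breed, mutate, repeat) can be condensed into one self-contained sketch. All names, parameter values and the toy two-gene fitness peak are illustrative assumptions, not the solver used in this research.

```python
import random

def evolve(fitness, gene_bounds, pop_size=40, generations=60,
           top_percent=50, mutation_rate=0.1, seed=1):
    """Generate -> evaluate -> select -> breed -> mutate, repeated."""
    rng = random.Random(seed)
    # Generation zero: random spread across the landscape.
    pop = [[rng.uniform(lo, hi) for lo, hi in gene_bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)       # evaluation
        parents = ranked[:max(2, pop_size * top_percent // 100)]  # selection
        pop = [ranked[0]]                 # keep the current best (elitism)
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randint(1, len(a) - 1)                  # crossover
            child = a[:cut] + b[cut:]
            child = [g + rng.gauss(0.0, 0.05)                 # mutation
                     if rng.random() < mutation_rate else g
                     for g in child]
            pop.append(child)
    return max(pop, key=fitness)

# Toy landscape: a single smooth peak at (0.3, 0.7).
def peak(g):
    return 1.0 / (1.0 + (g[0] - 0.3) ** 2 + (g[1] - 0.7) ** 2)

best = evolve(peak, [(0.0, 1.0), (0.0, 1.0)])
```

Keeping the single best genome each generation (elitism) guarantees that fitness never decreases between generations, which is one common way of stabilizing the repetition step.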

Fitness Pressure

In a GA, fitness pressure is an indication of how strongly the solver is pushed in a specific direction; it is determined by the methods described above. Generally, higher pressure makes the calculation quicker, as genomes tend to climb to particular peaks along minimal paths. But very high pressure is not healthy for the execution, as it tends to reduce the diversity in the gene-pool. On the other hand, very low pressure can also be precarious, as it allows the population to spread out. In the extreme case, a pressure of 0 is definitely harmful: the genomes lose their direction of travel, and random drift begins to counteract the progress of the algorithm.

3-4 . Conclusion

The ideas pulled together in this chapter have been developed by a number of experts in various academic fields. However, their use in combination is rare, highlighting the cross-domain nature of Cybernetic Architecture.

Recent technological developments in both hardware and software computing have been numerous. Using these philosophies and tools, a modified proposal is presented in the next chapter.