State of the Art / Background

2-1. Introduction

In this chapter, various precedents are considered, both in reviewing what has led up to the contemporary situation and in surveying the latest developments. These include developments from different academic fields such as Architecture, Engineering, Computer Science, Psychology, and Art. The precedents serve not only as case studies: their methodologies can also be applied to 'Programmable Architecture', the theme of this thesis.

2-2. From Architecture

2-2-1. Cedric Price and the Japanese Metabolism Movement

With reference to temporal design linked with the idea of emergence, it is worth looking at the Fun Palace plan and Plug-in City in the United Kingdom, and the Metabolism Movement in Japan, in the 1960s and 70s (Lin, 2010). The emergent idea in this context is that the architect and designer designed 'systems' rather than depicting static images. This in turn led to an unexpected range of behaviours even after the building was built: the actual user would decide how to use the structure. For example, in a capsule-based plug-in system the residents would decide how to manage the capsules or components. From the outside, such a development looks like a living system acquiring emergent behaviour. However, so far these movements have not been successful in solving social and architectural problems. Because of various limitations such as cost, a building's lack of portability, and the lack of an evaluation system (explained below), Metabolism never achieved large-scale success.

These ideas, first suggested by Cedric Price at the time of Britain's total dissolution of the planning system in the late 1960s, were clearly unworkable at the time. Yet already in the 1960s, in his counterattack on planning orthodoxy in the article 'Non-Plan' (Price, 1969) and in his article 'Activity and Change' (Price, 1962), he shows an awareness of 'time'.

“An expendable aesthetic requires no flexibility in the artefact but must include time as an absolute factor. Planned obsolescence is the order within such a discipline - shoes; motor cars; magazines.”(Price, 1962)

His time-based urban interventions have ensured that his work has an enduring influence on contemporary architects, even though he built little. He frequently asked 'Do you really need a building?' rather than 'What kind of building do you need?'. Such questions reveal that his architectural concept is concerned with human activities.

Fig 2-2-1,1 Fun Palace plan by Cedric Price

One of his famous unbuilt works, 'The Fun Palace' (1961), initiated with Joan Littlewood, established him as one of the most innovative architects of the period. Based on the slogan 'laboratory of fun', the idea was to make facilities for dancing, music, drama and fireworks. Central to Price's practice was the belief that, through the correct use of new technology, the public could have unprecedented control over their environment, resulting in a building which could be responsive to visitors' needs and the many activities intended to take place there. Using an unenclosed steel structure, fully serviced by travelling gantry cranes, the building comprised a 'kit of parts': pre-fabricated walls, platforms, floors, stairs, and ceiling modules that could be moved and assembled by the cranes. Virtually every part of the structure was variable. As the marketing material suggested, there was a wide choice of activities (drawing from the Canadian Centre for Architecture):

“Choose what you want to do – or watch someone else doing it. Learn how to handle tools, paint, babies, machinery, or just listen to your favourite tune. Dance, talk or be lifted up to where you can see how other people make things work. Sit out over space with a drink and tune in to what's happening elsewhere in the city. Try starting a riot or beginning a painting – or just lie back and stare at the sky.” (From his drawing)

"Its form and structure, resembling a large shipyard in which enclosures such as theatres, cinemas, restaurants, workshops, rally areas, can be assembled, moved, re-arranged and scrapped continuously," promised Price (Design Museum, 2013).

Although never built at this scale, Price eventually put these ideas into practice at a reduced scale in the 1971 Inter-Action Centre in the Kentish Town area of north London. The building constitutes an open framework into which modular, pre-fabricated elements can be inserted and removed as required according to need. Central to his thesis was the idea that a building should only last as long as it is useful: the centre was designed on the condition that it had a twenty-year life span, and it was accompanied by a manual detailing how it should be dismantled. For Price, time was the fourth spatial dimension, length, width and height being the other three. Price's architectural philosophy is not about the finished building but about the ability to enable and facilitate change in a changing world, and to allow us to think the unimaginable.

Fig 2-2-1,2 The system drawing of the Fun Palace plan by Cedric Price (drawing from the Canadian Centre for Architecture)

It is very rare to find a systems (cybernetic) diagram in an architectural drawing.

Price’s philosophy influenced a number of buildings including Richard Rogers and Renzo Piano's early 1970s project, Centre Georges Pompidou in Paris. During the 1970s in Japan the Metabolist movement (Lin, 2010) had a certain presence in the development of architecture and urbanism, but its influence is difficult to trace beyond academia, and surprisingly little of its influence is visible in the urban fabric of cities today.

Fig 2-2-1, 3 Centre Pompidou by R. Piano and R. Rogers (image © Katsuhisa Kida / FOTOTECA; image courtesy of the Royal Academy of Arts)

Centre Georges Pompidou commonly shortened to Centre Pompidou; also known as the Pompidou Centre is a complex building in the Beaubourg area of the 4th arrondissement of Paris, near Les Halles, rue Montorgueil and the Marais. It was designed in the style of high-tech architecture by the architectural team of Richard Rogers and Renzo Piano, along with Gianfranco Franchini. It houses the Bibliothèque publique d'information (Public Information Library), a vast public library, the Musée National d'Art Moderne, which is the largest museum for modern art in Europe, and IRCAM, a centre for music and acoustic research. Because of its location, the Centre is known locally as Beaubourg. It is named after Georges Pompidou, the President of France from 1969 to 1974 who commissioned the building, and was officially opened on 31 January 1977 by President Valéry Giscard d'Estaing. The Centre Pompidou has had over 150 million visitors since 1977.

Metabolists were unique pioneers, who tried to incorporate temporal design into the planning process. Temporal design is defined as a design methodology whose processes are time-based. Usually a design is done as a static image which one then tries to implement to achieve the original depicted image. By contrast this temporal design methodology was concerned about the whole life of the building - its construction process, its use as a building, and ultimately its collapse. To ‘metabolize’, in this context, is defined as a procedure that classifies the material properties of architectural elements according to their life cycle. Once this life cycle is completed, the component or element is removed and a new one is plugged in. Since then the interest in and need for the design and construction of intelligent buildings and their urban aggregation into a metabolic system for cities has increased. Social contexts have changed radically, many new materials have come into production, cities have expanded and computational resources and processes have increased by several orders of magnitude. In addition, there has been an increasing pressure to address sustainability issues such as carbon footprint, energy waste, etc. The question which arises is: “what is the appropriate model for urban Metabolism and what are the means of implementing intelligent and responsive buildings within urban metabolic systems?”
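To make the 'metabolize' procedure defined above concrete, the following minimal sketch classifies components by life cycle and swaps out any element whose cycle has completed. The component names and life spans are hypothetical, chosen only to show the classify-expire-replace loop:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """An architectural element classified by its expected life cycle."""
    name: str
    life_span: int  # design life in years
    age: int = 0

def metabolize(components, years):
    """Advance time and replace any component whose life cycle is complete:
    the expired element is unplugged and a fresh unit takes its slot."""
    replaced = []
    for c in components:
        c.age += years
        if c.age >= c.life_span:
            replaced.append(c.name)
            c.age = 0  # a new unit of the same type is plugged in
    return replaced

# Hypothetical life spans: short-lived capsules on a long-lived core.
building = [Component("capsule", 25), Component("megastructure core", 100)]
expired = metabolize(building, 30)  # the capsules expire first
```

Note that the whole (the list of components) persists while its parts are exchanged, which is precisely the Metabolist reading of a building as a living system.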


Fig 2-2-1,4 Nakagin Capsule Tower by Kisho Kurokawa

The Nakagin Capsule Tower (Nakagin Kapuseru Tawā) is a mixed-use residential and office tower designed by architect Kisho Kurokawa and located in Shimbashi, Tokyo, Japan. Completed in 1972, the building is a rare remaining example of Japanese Metabolism, an architectural movement emblematic of Japan's postwar cultural resurgence. The building was the world's first example of capsule architecture built for permanent and practical use.

fig 2-2-1, 5: System diagram for Plug-in Architecture (drawn by K. Hotta)

The Metabolists' approach lacks a means of evaluating and modifying architecture (buildings) after construction, leaving no method for the reduction of architecture (buildings).

The most relevant point with regard to the current thesis, which provides the basis for its hypothesis, is that the Metabolists' approach lacks a means of evaluating and modifying architecture (buildings) after it is constructed, leaving no method for the reduction of architecture (buildings) (fig 2-2-1). For example, the Metabolists considered 'growth', which here means simply adding capsule rooms to a building, only in the context of the Japanese post-war economic miracle from the 1950s to the 1970s. At that time Japanese leaders focused only on the increase and concentration of the population, from a Utopian political view that believed in impossibly idealistic schemes of social perfection. However, they did not concern themselves with the reduction or abandonment of units and whole areas. In other words, according to S. Hamano (Hiroki Azuma, 2009), they were missing a set of reduction rules which would create a means to judge a structure by its architectural and sociological aspects, analogous to the 'kill strategy' used in selection (JH, 1975 (published 1992)) within the field of biological computation, especially in Genetic Algorithms. It is true that they estimated and proposed future adaptability, but they did not design 'the subject'. This raises the question of who will take the initiative to handle this time-based evolutionary process: architects, governments, or users? This is why their system did not work well, although some public buildings have been metabolized (defined above), such as the National Museum of Ethnology in Osaka, Japan, designed by Kisho Kurokawa.
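The 'kill strategy' analogy can be illustrated with a minimal Genetic Algorithm sketch. The fitness function for built units is invented for illustration; the point is that an explicit removal (selection) rule, precisely what the Metabolists lacked, is what drives adaptation:

```python
import random

random.seed(1)

def fitness(unit):
    """Hypothetical score for a built unit (e.g. occupancy or energy performance)."""
    return sum(unit)

def evolve(population, kill_fraction=0.5, generations=10):
    """Selection with an explicit 'kill strategy': each generation the weakest
    units are removed and replaced by mutated copies of the survivors."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        keep = max(1, int(len(population) * (1 - kill_fraction)))
        survivors = population[:keep]
        offspring = [
            [gene + random.choice((-1, 0, 1)) for gene in random.choice(survivors)]
            for _ in range(len(population) - keep)
        ]
        population = survivors + offspring
    return population

# A population of hypothetical 4-room units, each gene a quality score.
pop = [[random.randint(0, 9) for _ in range(4)] for _ in range(8)]
best = max(evolve(pop), key=fitness)
```

Because the weakest candidates are culled every generation, the quality of the best surviving unit can only improve or hold steady; without the kill rule there is no selection pressure at all.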

2-2-2. Criticism of Teleological Planning in A. Isozaki's and C. Alexander's Ideas

fig 2-2-2, 1: A diagram of teleological planning (drawn by K. Hotta)

The traditional planning concept (fig 2-2-2,1), which involved first drawing an overall image of an orderly future and then implementing it incrementally, is no longer effective. Because designers already know the planning objective before it is built, it is easy to define the object, and to demolish it after building it. Architecture, however, is better understood as a dynamic event with a timeline rather than as an object. What is really needed is not a controlled top-down plan, but a process that begins with the parts and progresses toward a whole. In such a process there is no overall image (picture); there is never completion. What matters is a process in which the parts are self-sufficient (every part should have the ability to be dynamically interactive, i.e. to sense, think, and act). Generally a design, especially one by a single individual designer, tends to have as its goal the establishment of an order that is visually and logically complete. In such a case, the design is limited to visible outputs which appear capable of controlling the system, including architecture and cities. This conforms to a rigid, hierarchical 'tree-like' structure (C. Alexander, 1966). The spontaneous growth of real cities, on the other hand, has the character of a 'semi-lattice' (C. Alexander, 1966), constantly generating random relationships. This means that neither modern design methods, which set Utopia in the future, nor so-called city-planning methods, aimed mainly at controlling existing conditions, can be effective as long as they are based on a teleological structure. C. Alexander demonstrated logically the impossibility of such planning in his essay 'A City is not a Tree' (C. Alexander, 1966), pointing out the implicit self-contradiction: a single individual's planning method cannot help but possess a 'tree' structure.
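Alexander's distinction can be stated precisely: a collection of sets forms a tree when any two of them either nest or are disjoint, whereas a semi-lattice allows overlap. A small sketch, using hypothetical city 'units' as sets of elements:

```python
def is_tree(sets):
    """Alexander's criterion: a collection of 'units' forms a tree when any
    two of them either nest or are disjoint; overlap makes it a semi-lattice."""
    return all(a <= b or b <= a or not (a & b) for a in sets for b in sets)

# Hypothetical city units, each set a group of elements (houses, shops, ...).
tree_city = [frozenset("abcdef"), frozenset("abc"), frozenset("def"), frozenset("a")]
semilattice_city = [frozenset("abc"), frozenset("bcd"), frozenset("bc")]

print(is_tree(tree_city))         # True: planned units nest cleanly
print(is_tree(semilattice_city))  # False: units overlap, as in a grown city
```

The overlapping units of the second collection are exactly what a single top-down plan cannot express, which is the formal core of Alexander's critique.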

A. Isozaki was already aware of this issue, which placed him in a critical position with respect to the Japanese Metabolist Movement. The problem is that the specialist's design cannot help but be a teleological structure, as opposed to an architect-less design. The Metabolists tried to avoid such subjective specialist design methods by using autonomous and time-based design methods (time-based design being an architectural system that has the ability to change after being built), but they fell into the same trap, because they left the subject ambiguous with regard to the actual growing process. They drew beautiful Utopian drawings but did not set rules as to who would control the structure after it left the architect's hands. Truly autonomous design should be handled by the user.

As a brief conclusion, neither functionalism nor top-down or bottom-up planning methodologies by architects were able to work properly. What is needed are more participatory planning methods: an architect-less architectural system, not in the vernacular tradition that B. Rudofsky (Rudofsky, 1964) championed, but a more contemporary method using novel communication tools.

2-2-3. A Shortcoming of Parametricism

Parametric control in architectural modelling has been adopted by many scholars, architects, and computer scientists. The word 'Parametricism' was first introduced by Patrik Schumacher in his book The Autopoiesis of Architecture (Schumacher, 2010b). Not only his thesis but also the actual architectural methodology of his studio is built around digital-architectural design.

Fig 2-2-3,1 One example of parametric urbanism, by Ludovico Lombardi, Du Yu, Victoria Goldstein, Xingzhu Hu; supervised by Patrik Schumacher.

Even though parametric design provides an initially dynamic response to a set of factors, such as effective shading for saving energy, the final product is a static solution that does not further adapt to either environmental changes or changes in the social context. If Parametricism is used appropriately, that is, as a tool rather than only in the pursuit of artistic uniqueness, it can provide one of the means to create pre-adaptive architecture (pre-adaptive architecture being defined as an architectural system which runs dynamic simulation with changeable geometrical parameters for various types of optimization at the planning stage, i.e. before construction). However, this methodology is effective only at the planning stage; once the building is built, there is no way to revise it in response to changes in its context. Even though there is a virtual software-based loop (fig 2-6-1), the system cannot use feedback after the hardware is established, and thus it will not be fully adaptive. If the required environmental or functional conditions change, it becomes difficult to achieve the desired outcome. For example, if a building has a reading room whose function is required to change to that of a bedroom, the environmental conditions in the room also need to change: the sunlight should be fully screened, and it should be quiet enough for sleep. Architecture cannot satisfy this requirement unless it incorporates time-based adaptability after construction.
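The shortcoming can be sketched in a few lines: a parametric search runs once over the design space, and the winning parameters are then frozen into built form. The objective function below is an invented toy, not any real daylighting model:

```python
def daylight_score(depth, window_ratio):
    """Invented toy objective: prefer rooms about 4 m deep whose glazing
    grows with depth. Not a real daylighting model."""
    return window_ratio * 10 - abs(depth - 4) - abs(window_ratio - depth / 8) * 5

def optimize(param_grid):
    """Parametric design: the parameter space is searched once, at the
    planning stage, and the winner is frozen into built form."""
    return max(param_grid, key=lambda p: daylight_score(*p))

# Candidate (room depth in m, window-to-wall ratio) pairs.
grid = [(d, w / 10) for d in range(2, 8) for w in range(1, 10)]
best = optimize(grid)
# From here on the geometry is fixed: no feedback loop can re-run the
# search once the building stands.
```

If the room's function later changes (reading room to bedroom, say), the objective function changes, but nothing in this pipeline can re-enter the loop: that is exactly the missing post-construction feedback.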

There are, however, numerous possibilities here. If the components have a time-dependent adaptability, physically and digitally, true adaptability might actually materialize, at the building level and, in principle, also at the city level.

fig 2-2-3, 2: A system diagram of Parametric Architecture (drawn by K. Hotta)

Before it is built, the architecture has adaptability with regard to various circumstances such as environment or cost. After it is built, however, the building becomes a fixed shape: it no longer has fluid-like adaptability, even though it is optimised. There is no loop after construction, which indicates that the architecture has lost its adaptability.

2-2-4. Three Realized Cybernetic Architecture Projects

fig 2-2-4,1: Blur by Diller and Scofidio, 2002, Expo.02 in Neuenburg, Yverdon, Biel and Murten (photo by Norbert Aepli)

The Blur Building was built for the Swiss Expo 2002 on Lake Neuchatel. It is an architecture of atmosphere, according to its designers Diller & Scofidio (Diller, 2002). The structure uses traditional static tensegrity principles to support an open deck covered by a network of computer-controlled water nozzles. Water is pumped from the lake, filtered, and shot as a fine mist through 31,500 high-pressure nozzles, shrouding the structure to produce a responsive building envelope (the response mechanism is explained below). This technique allows the size of the cloud, and therefore the size of the building envelope, to be directly related and responsive to the environmental conditions surrounding the building. The aim of controlling the building in this way was to produce enough mist to cover the entire structure while not allowing it to drift or stray too far from the building (Diller and Scofidio, 2002). "A smart weather system reads the shifting climactic conditions of temperature, humidity, wind speed and direction, and processes the data in a central computer that regulates water pressure." (http://www.dillerscofidio.com). Only one parameter is controlled: the density of mist. Sensors measuring wind speed and natural air humidity were placed within the building and along the shoreline to collect the environmental data used to control the rate of mist produced by the computer-controlled nozzles. In addition, several individually controllable zones exist, enabling portions of the building to be shrouded at different rates thanks to multiple sensors and nozzles. This object-oriented system is shown in figure 2-2-4,2. Although this architecture uses responsive technologies to control the size and shape of its envelope dynamically, and can be considered a pioneering example of an event-based design concept, the control system is quite limited and not particularly interactive, especially in response to human input.
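The single-parameter, zoned control described above might be sketched as follows. The proportional mapping and its constants are assumptions for illustration, not the actual Expo.02 control code:

```python
def regulate_pressure(wind_speed, humidity, target_density=1.0):
    """Single-parameter responsive control in the spirit of Blur: sensor
    readings map to one output, the nozzles' water pressure. The
    proportional rule and its constants are assumptions, not Expo.02 code."""
    # Dry, windy air disperses the cloud faster, so pressure must rise.
    demand = target_density * (1 + 0.1 * wind_speed) * (1.5 - humidity)
    return max(0.0, min(demand, 2.0))  # clamp to the pumps' working range

# Zoned control: each shoreline sensor drives its own group of nozzles.
zones = [(2.0, 0.8), (8.0, 0.3)]  # (wind speed in m/s, relative humidity)
pressures = [regulate_pressure(w, h) for w, h in zones]
```

However elaborate the sensing, the loop closes on one output variable only, which is why the system reads as responsive rather than genuinely interactive.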

fig 2-2-4,2: The System Diagram of a Responsive System

The diagram shows a typical responsive system, i.e. Blur.

fig 2-2-4,3: The Aegis Hyposurface by dECOi, 1999–2001

The Aegis Hyposurface (1999–2001) is a dECOi project, designed principally by Mark Goulthorpe and the dECOi office with a large multidisciplinary team of architects, engineers, mathematicians and computer programmers, among others. The team included Professor Mark Burry, who was working at Deakin University at the time, along with various others from Deakin's group, including Professor Saeid Navahandi and Dr Abbas Kouzani. The project was developed for a competition for an interactive artwork for the foyer of the Birmingham Hippodrome Theatre, and was devised for the cantilevered 'prow' of that theatre.

From an engineering point of view, the surface was built upon a framework of pneumatic pistons, springs, and metal plates, all of which were used to deform a façade-like surface (Liu, 2002). Effectively, the piece is a facetted metallic surface that can deform physically in response to stimuli from the environment (movement, sound, light, etc.). Driven by a matrix of actuators consisting of 896 pneumatic pistons, its dynamic-elastic 'terrains' are generated from real-time calculations. Behind the façade's surface, the pneumatic pistons are attached to metal plates that form the wall surface. Springs are attached to both the pistons and the static structural frame, helping to control the location of each piston by anchoring it to the frame. The system has a dynamically reconfigurable surface capable of real-time responsiveness to events in the theatre, such as movement or sound. A computer was programmed to fire each piston sequentially in order to produce a series of patterns that responded to environmental stimuli, sound being the particular stimulus used. The implicit suggestion is one of a physically responsive architecture in which the building includes an electronic 'central nervous system', the surfaces responding to any digital input (sound, movement, Internet, etc.).
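The centrally computed actuation might be sketched as follows. A 16 x 56 grid matches the stated 896 pistons, but both the grid layout and the travelling-wave pattern are assumptions made for illustration:

```python
import math

def surface_displacement(t, rows=16, cols=56, amplitude=1.0):
    """One frame of a centrally computed actuation pattern: every piston in
    a 16 x 56 grid (896 in total, matching Aegis) receives a displacement.
    The travelling-wave formula is an assumption for illustration."""
    return [
        [amplitude * math.sin(0.5 * r + 0.3 * c - t) for c in range(cols)]
        for r in range(rows)
    ]

frame = surface_displacement(t=0.0)  # one displacement value per piston
```

The key design property visible here is that every piston's value is computed by one central function of time and stimulus; there is no local intelligence in the actuators themselves.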

However, there are some limitations. One is the structural split between flexible and static parts: all structural loads are supported by a traditional, static framework, so the dynamic quality of the building relates only to the skin, not to the structure that supports it. A second issue is that the project does not provide any functional solution, such as an architectural façade or a shelter; rather, it simply demonstrates a kinetic system. Sterk (Sterk, 2009b) criticized this work, stating that 'Thus a responsive architecture that consists of a functional building envelope which shelters people from environmental loads by addressing the principles of cold bridging, rain screens, and the dynamic transferring of structural loads still needs to be resolved.' Finally, the system is too simple to provide multi-layered interactivity, because every controller is linked to a central computer. Hence it is not designed to provide higher intelligence or agent-based functionality.

fig 2-2-4,4: The System Diagram of a Controllable System (i.e. Hyposurface)

fig 2-2-4,5: Water Pavilion by NOX and Kas Oosterhuis

The Water Pavilion consists of two parts: the Freshwater Pavilion, designed by NOX, and the Saltwater Pavilion, by Kas Oosterhuis (Oosterhuis Associates), built in Holland in 1997.

The aim is to educate and inform the public about the latest technical advances, as well as to celebrate water's more sensuous properties. These examples propose an alternative ordering of sensors and actuators, and of their control as a responsive and interactive system. The result has been called 'liquid architecture' (the description comes from the American pioneer in cyberspace architecture, Marcos Novak (Schwartz)). In order to effect a continuous interplay between people and building, the designer wants a chain reaction that is constantly out of balance.

In the Freshwater Pavilion, in the absence of clearly definable floors and walls, people lose their balance and fall over; this new architecture demands a new kind of behaviour by the visitor. At the same time, however, the architecture in this example is driven by visitors; this is ‘the interactivity’. The Water Pavilion is the first very large and complex, fully interactive, three-dimensional environment ever built. It is more than a quasi-interactive environment where the user can only choose from a limited number of possibilities supplied by the designer. The software built into the Water Pavilion receives so many different sorts of inputs that even the designers cannot predict the results. Every moment is different and unexpected. This makes the Water Pavilion not just an experience but also an unparalleled testing ground for the study of interactivity.

In terms of hardware, in the Freshwater Pavilion there is no distinction between horizontal and vertical, between floors, walls and ceilings. This translates into a responsive interior that uses the advantages offered by virtual environments to produce programmable spaces. According to Lootsma and Spuybroek (ref-33), using this interactive system through a thousand blob trackers and light sensors, the architecture can provide a hybrid space between a virtual representation and the physical space. The key is the interactivity between these two.

Every group of sensors is connected to a projector that projects a 'standard' wire frame; together these create a grid across the building and translate every action of a visitor into real-time movements of (virtual) water. "The light sensors are connected to the 'wave'. Every time one walks through a beam of infra-red light one sees a wave going through the projected wire frames . . . visitors can create any kind of interference of these waves", according to Lars Spuybroek (Lootsma, 1997). There are three different outputs or systems: 1) projected animations, 2) a lighting system, and 3) an audio system. Each of these three systems is distributed throughout the building to make a rich, multimedia environment.
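The fan-out from one sensor event to all three output systems might be sketched as follows. The ripple superposition, the lighting presets, and the pitch mapping are all invented for illustration:

```python
import math

def wave_height(x, t, sources):
    """Interference of visitor-triggered ripples: each broken light beam adds
    a source, and heights superpose linearly (a hypothetical model)."""
    return sum(math.sin(abs(x - sx) - (t - st)) for sx, st in sources if t >= st)

class Pavilion:
    """Each sensor event fans out to all three output systems at once."""

    def __init__(self):
        self.sources = []  # (position, start time) of triggered ripples

    def beam_broken(self, position, t):
        self.sources.append((position, t))
        return {
            "projection": wave_height(position, t, self.sources),
            "light": len(self.sources) % 8,     # cycle lighting presets
            "audio": 220 * (1 + position % 3),  # pick a pitch by zone
        }

p = Pavilion()
out = p.beam_broken(position=4, t=0.0)
```

Because every visitor event perturbs a shared state that all later events interfere with, even this toy version quickly becomes unpredictable, mirroring the designers' claim that they could not foresee the results.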

However, some unsolved issues are worth mentioning. Firstly, the physical form is fixed and non-responsive, even though some phenomenal visualizations attempt to be responsive and interactive. Spatial quality often depends highly on the shape of the building; here the interior space is static, and again the structural framework is isolated from the interactivity. Secondly, the multi-layered interactivity can provide a number of programs, but it is random: from the user's view, it is quite rare to come across useful functions accidentally in randomly controlled interactivity. Ideally the functions should be programmable by the end user. In other words, the interactivity has been designed and fixed by the system designer and not by the user.

fig 2-2-4,6: The diagram of a Distributed Control System (i.e. Water Pavilion)

2-2-5. Nicholas Negroponte's Idea

'Responsive systems' have been developed in many fields, including the architecture of the 1970s. The accepted definition of responsive architecture is a class of architecture or building that demonstrates an ability to alter its form so as to continually reflect the environmental conditions that surround it.

The basic work on responsive architecture was done by Nicholas Negroponte at MIT, described in The Architecture Machine (Negroponte, 1970) and Soft Architecture Machines (Negroponte, 1975). His research focuses on three main aspects.

First, he looks at the importance of the interface between humans and the architectural machine. In the earlier book (Negroponte, 1970) he provides a literature review on systems theory and probes deeply into the underlying issues of man-machine relationships and artificial intelligence. He also proposes that responsive architecture is the 'natural product of the integration of computing power into built spaces and structures, and that better performing, more rational buildings are the result' (Negroponte, 1975).

Secondly, he considers user participation, looking towards open-source buildings. In practice, user participation in architecture requires planning without a designer before the building is built, and controllability by the user during the planning phase and even post-construction. Two aspects are key here: designing architecture that is ever-changeable, and using a bottom-up approach in which the end user has the ability to change the building at every phase. In his second book he discussed fully the literature on user participation in design. He said that the ideal design is so seamlessly and well integrated that it is not possible to tell which partner contributed what, which also leads to highly creative and innovative results.

Finally, he looks at computer personalization. Negroponte looks forward to man-machine relationships becoming so personal that both man and machine can politely interrupt encrusted habits with fresh inspiration or a nudging reminder of higher priorities. He also predicts that the response pattern of a machine interacting with designers would differ according to their temperament and culture.

The caveat is that his thesis was written in the 1970s, which means it could not take into account the recent dramatic development of complex systems and their computing characteristics, of robotics and robotic systems, and of the communication tools now widespread among end users, such as wireless internet and smartphones. He and his group nevertheless had magnificent foresight.

2-3-1-1. Subsumption Architecture (in Robotics)

Subsumption Architecture (SA) is a reactive approach in artificial intelligence (AI) developed to determine robot behaviour. The term and the field were introduced by R. Brooks in 1986 (Brooks, R. 1986). This philosophy has widely influenced autonomous robots and real-time artificial intelligence.

SA consists of hierarchical modules, each containing a simple behaviour. The idea is to divide complex intelligence into simple modules arranged as layers, each layer designed and implemented for a specific objective; the higher the layer, the more abstract its competence. This methodology differs from traditional techniques of artificial intelligence in that Subsumption Architecture takes a bottom-up approach.
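The layered, bottom-up arbitration can be sketched in a few lines. This collapses Brooks' suppression/inhibition wiring into a simple priority scheme, and the three behaviours are hypothetical:

```python
def avoid(sensors):
    """Lowest layer: react to obstacles before anything else."""
    if sensors["obstacle"]:
        return "turn-away"

def seek_light(sensors):
    """Higher, more abstract layer: head for light when it is visible."""
    if sensors["light_visible"]:
        return "approach-light"

def wander(sensors):
    """Default layer: move about when nothing else claims control."""
    return "wander"

# Layers in arbitration order; each is a complete behaviour on its own.
LAYERS = [avoid, seek_light, wander]

def act(sensors):
    """Pick the first layer that produces a command — no central world model."""
    for layer in LAYERS:
        command = layer(sensors)
        if command:
            return command

print(act({"obstacle": True, "light_visible": True}))    # turn-away
print(act({"obstacle": False, "light_visible": True}))   # approach-light
print(act({"obstacle": False, "light_visible": False}))  # wander
```

Note that removing the upper layers still leaves a working robot (it avoids and wanders), which is the property Brooks describes as the lower layers forming a complete operational control system.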

R. Brooks built the insect-like robot 'GENGHIS' in 1991 (Brooks, R. 1991) based on Subsumption Architecture. As a development of this work, the automated vacuum cleaner 'ROOMBA' was realized practically in the consumer world. In the military field, the bomb-disposal robot 'PackBot' operates in hazardous areas such as battlefields and earthquake zones.

Fig 2-3-1-1,1: Layered control system

The author states: "Control is layered with higher levels subsuming the roles of lower level layers when they wish to take control. The system can be partitioned at any level, and the layers below form a complete operational control system." (p. 7, fig. 3, 'A Robust Layered Control System for a Mobile Robot', R. Brooks)

Fig 2-3-1-1,1: Genghis, one of Rodney Brooks' "artificial creatures", stands over a real insect. Rather than build robots that mimic people, Brooks feels they should mimic insects and be, in his words, "fast, cheap, and out of control." (Photo by Louie Psihoyos | CORBIS)

Fig 2-3-1-1, 2 : Roomba is one of a series of autonomous robotic vacuum cleaners sold by iRobot.

Roomba was introduced in 2002. As of February 2014, over 10 million units had been sold worldwide. Roomba features a set of basic sensors that help it perform tasks. For instance, it is able to change direction on encountering obstacles, detect dirty spots on the floor, and detect steep drops to keep it from falling down stairs. It uses two independently operating wheels that allow 360-degree turns in place. Additionally, it can be adapted to perform other, more creative tasks using an embedded computer in conjunction with the Roomba Open Interface.

Fig 2-3-1-1, 3 : PackBot

PackBot is a part of a series of military robots by iRobot. More than 2000 are currently stationed in Iraq and Afghanistan, with hundreds more on the way. PackBots were the first robots to enter the damaged Fukushima nuclear plant after the 2011 Tōhoku earthquake and tsunami.

2-3-1-2. Swarm Robotics

Swarm robotics is a relatively recent methodology for multi-robot systems. Each individual robot is a simple unit when compared to a traditional intelligent machine. Collective behaviour emerges from robot-to-robot interaction and robot-to-environment interaction. The idea was developed from biological studies of insects such as ants, as well as other natural phenomena. Emergent behaviour is observed in social insects: relatively simple individual rules produce a large set of complex swarm behaviours. From this the field of artificial swarm intelligence gets its name. It concerns not only computer science but philosophy as well.

Fig 2-3-1-2, 1 : Symbrion, an example of swarm robotics

Symbrion (Symbiotic Evolutionary Robot Organisms) is a project funded by the European Commission to develop a framework in which a homogeneous swarm of miniature interdependent robots can co-assemble into a larger robotic organism to gain problem-solving momentum. Project duration: 2008-2013. One of the key aspects of Symbrion is inspired by the biological world: an artificial genome that allows it to store and evolve (sub)optimal configurations in order to achieve an increased speed of adaptation. The SYMBRION project did not start from zero; previous development and research from the I-SWARM project and the open-source SWARMROBOT project served as a starting point. A large part of the developments within Symbrion are open-source and open-hardware.

The research is divided into three main parts: the design of their physical bodies, the design of their control behaviours, and lastly the development of observational methods. In terms of their physical parts, the major difference from stand-alone (distributed) robots is the number of robots, requiring a degree of scalability. Each physical robot must also have a wireless transmission device using transmission media and protocols such as radio or infrared waves, Wi-Fi, or Bluetooth; most of the time it is used locally. In terms of control behaviours, the system operates through constant feedback between members of the group. This communication is the key. The swarm behaviour involves constant change of individuals in cooperation with others, as well as the behaviour of the whole group. Lastly, to observe this phenomenon systematically, video tracking is an essential tool (though there are other ways: an ultrasonic position tracking system was recently developed by the Bristol Robotics Laboratory).

Generally the limiting constraints are the body’s miniaturization and its cost. A possible solution is simplifying the individual robots, as the significant behaviour is located at the swarm level rather than at the individual level. Though most efforts have focused on relatively small groups of machines, a swarm consisting of 1,024 individual units was demonstrated by Harvard in 2014, the largest to date. Further research into the reliable prediction of swarm behaviour, where only the features of the individual swarm members are assigned, is expected. One of the major potential applications of swarm robotics is distributed sensing tasks in micromachinery, such as inside the human body; this is called nanorobotics or microbotics. Other future applications in the fields of mining and agricultural foraging are predicted.
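The central claim above, that simple local rules produce collective behaviour, can be illustrated with a minimal simulation (a made-up one-dimensional example, not a model of any particular platform): each robot senses only neighbours within a fixed range and steps toward their mean position, and aggregation emerges at the swarm level.

```python
# One local rule per robot: move a small step toward the centroid of the
# neighbours you can sense. No robot knows the global configuration.

def step(positions, sensing_range=3.0, gain=0.1):
    new_positions = []
    for x in positions:
        neighbours = [y for y in positions if abs(y - x) <= sensing_range]
        centroid = sum(neighbours) / len(neighbours)  # includes self
        new_positions.append(x + gain * (centroid - x))
    return new_positions

def spread(positions):
    """Extent of the swarm: a crude measure of aggregation."""
    return max(positions) - min(positions)

swarm = [0.0, 1.0, 2.0, 3.0, 4.0]
for _ in range(50):
    swarm = step(swarm)
# The swarm contracts into a cluster although no robot was told to cluster.
```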

2-3-1-3. Self-Reconfigurable Modular Robots

Self-reconfigurable modular robots are defined as autonomous kinematic machines that have variable morphology. "Self-reconfiguring" or "self-reconfigurable" means that the mechanism is capable of utilizing its own control system to change its overall structural shape. In the title, "modular" refers to the ability to add or remove modules from the system, rather than to a position in an ordered series of modules. Having an indefinite number of identical self-reconfigurable modules in a matrix structure creates the potential for a variety of functions. There are two typical methods of segment articulation: chain reconfiguration and lattice reconfiguration.

Fig 2-3-1-3, 1 : PolyBot by Yim et al. (PARC)

This is an example of self-reconfigurable robotics. "PolyBot is a modular, self-reconfigurable system that is being used to explore the hardware reality of a robot with a large number of interchangeable modules. PolyBot has demonstrated the promise of versatility, by implementing locomotion over a variety of objects. PolyBot is the first robot to demonstrate sequentially two topologically distinct locomotion modes by self-reconfiguration. PolyBot has raised issues regarding software scalability and hardware dependency and as the design evolves the issues of low cost and robustness will be resolved while exploring the potential of modular, self reconfigurable robots." (p. 1, Yim, M. et al. (2000), PolyBot: a Modular Reconfigurable Robot)

In terms of the physical robot, compared with morphologically fixed robots, these robots can change their own shape by deliberately rearranging the connectivity of their parts. The objectives of this function are to adapt to new circumstances, to perform another task, or to recover from damage. As to their actual morphology they can, for instance, assume a worm-like shape to crawl forward, a spider-like shape to walk on a rough road, or a ball-like shape for rolling on flat terrain. They can also assume fixed forms such as a wall or a building. They consist of mechanical parts, but may also have distributed electronics such as sensors, processors, memory, actuators and power supplies. The major difference from normal (stand-alone) robots is that each individual has multiple connectors allowing for a variety of ways to connect. The modules may have the ability to automatically connect and disconnect themselves to and from each other. Thanks to these parts, the robots can form a variety of objects and perform many tasks that involve moving in and manipulating the environment.
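The bookkeeping behind such connect/disconnect operations can be sketched as a graph of modules (an illustrative abstraction; a real system such as PolyBot must also handle mechanical alignment, latching and power): reconfiguration is then a sequence of edge changes, after which the control system can check that the structure is still one connected matrix.

```python
# Modules as graph nodes; physical connections as undirected edges.

class ModularRobot:
    def __init__(self):
        self.links = {}  # module id -> set of connected module ids

    def add_module(self, module):
        self.links.setdefault(module, set())

    def connect(self, a, b):
        self.links[a].add(b)
        self.links[b].add(a)

    def disconnect(self, a, b):
        self.links[a].discard(b)
        self.links[b].discard(a)

    def is_connected(self):
        """Breadth-first search: is the robot still one structure?"""
        if not self.links:
            return True
        start = next(iter(self.links))
        seen, frontier = {start}, [start]
        while frontier:
            for other in self.links[frontier.pop()]:
                if other not in seen:
                    seen.add(other)
                    frontier.append(other)
        return len(seen) == len(self.links)

robot = ModularRobot()
for m in (0, 1, 2):
    robot.add_module(m)
robot.connect(0, 1)  # chain configuration: 0-1-2
robot.connect(1, 2)
```

Disconnecting a link splits the chain into two pieces; reattaching the detached module elsewhere restores a single structure, which is the essence of chain reconfiguration.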

2-3-1-4. Cooperative/Social Robot

Cooperative Robot vs Social Robot : A cooperative robot is an autonomous robot that interacts and communicates with other autonomous physical agents. A social robot is an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following social behaviours and rules attached to its role. The definitions of ‘Cooperative Robot’ and ‘Social Robot’ are almost the same, differing slightly in the interaction target. Compared to the cooperative robot, which only communicates with other robots, a social robot interacts with humans and embodied agents. In other words, the social robot is able to exhibit competitive behaviour within the framework of a game. This can include minimal or even no interaction, as uncooperative behaviour can be considered a social response in certain situations. A robot that only interacts and communicates with other robots would not be considered a social robot, but a cooperative one. The idea of a ‘robot’ traditionally excludes characters on screen, ‘talking heads’ for instance, suggesting that both the cooperative and the social robot must have a physical embodiment. This may, however, be a somewhat old-fashioned view, as recently there are mechanisms on the borderline between the physical and digital domains. For instance, there is a mechanism that has a projected head and a mechanical body, but is considered a ’robot’. A final aspect of these mechanisms, alongside ‘socializing’ and ‘embodiment’, is ‘autonomy’.

For example, a remote controlled robot cannot be considered to be cooperative/social even though it seems to interact with others. It is merely an extension of another human because it does not make decisions by itself. There is an argument as to whether semi-autonomous machines are cooperative/social robots.

The field of cooperative/social robotics was started in the 1940s-50s by William Grey Walter (1910-1977, American-born British neurophysiologist and roboticist). In 1949, W. Grey Walter started building three-wheeled, mobile robotic vehicles, calling them ‘turtles’ or ‘Machina Speculatrix’ after their speculative tendency to explore their environment. These vehicles consisted of a light sensor, a touch sensor, a propulsion motor, a steering motor, and a two-vacuum-tube analogue computer. Even with this simple design, Grey Walter demonstrated that his robots exhibited complex behaviours. His robots were unique because, unlike the robotic creations that preceded them, they didn't have a fixed behaviour. The robots had reflexes which, when combined with their environment, caused them to never exactly repeat the same actions twice. This emergent life-like behaviour was an early form of what we now call Artificial Life. He also examined how two machines interact, recording the result as a long-exposure photograph; that is why this research counts as early work on cooperative/social robots. (fig 2-3-1-4, 1, 2, 3)
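Walter's turtles worked with only two sensors and a handful of reflexes; the flavour of that behaviour can be caricatured as follows (a loose sketch with invented thresholds and action names, not a model of his analogue valve circuit):

```python
# Caricature of Machina Speculatrix reflexes: touch overrides light,
# dazzlingly bright light repels, moderate light attracts, darkness
# triggers speculative exploration. The interplay of these reflexes
# with the environment produced the observed circling around a lamp.

def turtle_reflex(light_level, touching):
    if touching:
        return "push-and-turn"       # obstacle reflex dominates
    if light_level > 0.8:
        return "turn-away"           # too bright: retreat
    if light_level > 0.2:
        return "steer-toward-light"  # moderate light attracts
    return "explore"                 # darkness: wander speculatively
```

Because the action depends on the continuously changing light and touch readings, even this tiny rule set never replays exactly the same trajectory twice in a real environment.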

Fig 2-3-1-4, 1 : Machina Speculatrix - Elsie

The first generation of these robots were named Elmer and Elsie (ELectro MEchanical Robots, Light Sensitive). The photo shows Elsie without her shell. In 1951, his technician, Mr. W. J. 'Bunny' Warren, designed and built six new turtles for Grey Walter; they were of a high professional standard. Three of these turtles were exhibited at the Festival of Britain in 1951; others were demonstrated in public regularly throughout the fifties.

Fig 2-3-1-4, 2 : Machina Speculatrix - Elsie

This is the original circuit diagram for the 1951 turtles. It is slightly different from the circuit diagram for Elmer and Elsie, but works in the same way. (Copyright Burden Neurological Institute)

Fig 2-3-1-4, 3 : The Paths of Elmer and Elsie Interacting

"This photograph is the first of 9 taken at a single session in Grey Walter's house in 1950. Candles were fixed to the turtles' shells, and long exposures were used. The light streaks show the paths of the turtles. These are the best scientific records we have of the way the turtles actually behaved. This photograph shows Elsie approaching a light, and then circling around it at a distance."

"This shows Elmer and Elsie interacting. At first they move towards each other, and engage in the fascinating dance described in "The Living Brain". However when the light in the hutch is switched on, they ignore each other and both head for the hutch. Elsie always worked rather better than Elmer so she gets there first. Note Elmer's shell was fabricated from many separate pieces of material."

(Copyright Burden Neurological Institute)

2-3-1-5. Replicative/Evolutionary Robot

The development of robotics is not only about control methodologies. Robust performance in the face of uncertainty is one of the biggest challenges (Ref), as most robotic systems are manually designed and based on physics or mathematics. One of the starting points in the face of uncertainty is a learning system (Ref) to improve performance and accuracy in the midst of uncertainty. But even learning systems are only effective in connection with an initial starting point; they lack adaptability in the face of unexpected changes in circumstances. Recently (2010s), a new research field has arisen under names such as replicative robotics or evolutionary robotics.

Fig 2-3-1-5, 1 : Creatures evolved for walking

The proposed creatures here are asked to somehow move across the floor. The space is simulated, as are many of the physical laws; while similar to the real world, it is still virtual. (figure from K. Sims (1994), Evolving Virtual Creatures)

1) Evolving Virtual Creatures by K. Sims in 1994

In 1994, the digital creatures proposed by Karl Sims in his paper ‘Evolving Virtual Creatures’ (K. Sims, 1994) represented some of the earliest work on artificial evolutionary computing. The robotic object, even in its virtual form, is evolved within the ‘hyperspace’ of his world, which consists of a 3-dimensional virtual space based on physical laws such as gravity, friction, and air viscosity. The system is governed by a combination of a GA (genetic algorithm) and neural networks. He also invented a methodology for describing hierarchical connections between plural objects. He started this work as an animation using automated control of object behaviour, rather than with an interest in robotics. There is a trade-off in object control in animation: when you use ‘Kinematic control’ it is difficult to show chaotic or random disorder, while if you use ‘Dynamic Simulation’ it becomes difficult to get the simulated behaviour to match the desired behaviour.

By giving the creatures a system whereby they evolve themselves, this system can govern both their own morphology and their control. He used nodes and connections for his genome. There were numerous possible ‘fitness’ criteria, but in this experiment the distance from the initial point to a target point was used. The creatures invented unique shapes and behaviours by themselves, which sometimes went beyond the designer’s imagination.
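The selection loop described above can be reduced to a few lines (a drastically simplified sketch: the genome here is just a list of step lengths and the fitness is the distance covered, whereas Sims evolved graph-structured genomes encoding both morphology and neural control):

```python
import random

random.seed(1)  # make the illustrative run repeatable

def fitness(genome):
    """Distance from the initial point -- the criterion used in the experiment."""
    return sum(genome)

def mutate(genome, rate=0.3):
    # Each gene has a chance of being perturbed.
    return [gene + random.uniform(-0.1, 0.5) if random.random() < rate
            else gene for gene in genome]

def evolve(population, generations=30):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]  # truncation selection
        population = survivors + [mutate(g) for g in survivors]
    return max(population, key=fitness)

population = [[random.uniform(0.0, 0.1) for _ in range(5)] for _ in range(20)]
best = evolve(population)
# After thirty generations the best genome travels farther than a
# typical random initial genome.
```

Selection keeps the best half of each generation, so the best fitness never decreases; the creativity Sims reports comes from letting mutation act on morphology and control together, not just on a parameter list as here.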

In the last chapter, he described four future directions. First, making the fitness criteria more complex, not only by introducing multiple criteria but by incorporating the idea of efficiency. Second, allowing the rendering (skins and material) and morphological features also to become evolutionary and engage in material evolution. Third, competition between plural individuals, as in the biological world, not only bringing improvement but adding socializing to the mix. Finally, applying it to a real robot. Those points are indeed interesting suggestions. In reading his paper, however, it is worth noting the way he interfered with the creatures’ evolution: he did not clearly explain his methodology or its benefits apart from the use of a human ’aesthetic’ decision to pick out different results in the end. He also mentioned the intelligence of the creatures at the very end of the paper but did not actually develop an argument for this.

Fig 2-3-1-5, 2 : One of the individuals from the ‘Golem Project’

One of the individuals: the 3D-printed body and the assembled model. Advances in rapid prototyping enabled this project.

2) Golem Project in 2000

Around 2000, two researchers at Brandeis University in the US started a self-replicative robot project called the ‘Golem project’. Later this series of work was compiled as ‘Automatic design and manufacture of robotic life forms’ (2000, H. Lipson and J. Pollack). In these projects, they demonstrate an autonomous replicative robot and its evolution (note: not only evolutionary computation) with minimum human intervention. Using MEMS (micro-electro-mechanical systems) and rapid prototyping, the winning individual in the virtual world was automatically converted into a physical object.

Even before this work, the idea of evolutionary computing, including the idea of self-replication, had been developed (see previous chapter). But their projects’ uniqueness lay in the fact that they made progress in the area of autonomous design and manufacturing. Traditionally, the design of lifelike bodies and generational individual replacement were thought to require a complex set of chemical factories. They emphasized that artificial life or intelligence cannot stay in the virtual world but needs to be revealed in the physical world through feedback or sensing.

Fig 2-3-1-5, 3 : Diagram from “Resilient Machines Through Continuous Self-Modeling”

The robot has an internal model which explores the optimised behaviour. After many generations of execution it is sometimes output to a physical robotic body and tested to see whether it works or not. (Bongard, J., Zykov, V. and Lipson, H., 2006, Resilient Machines Through Continuous Self-Modeling)

3) Resilient machine:

Cornell University’s research group of Bongard, J., Zykov, V. and Lipson, H. uses the term ‘resilient machine’ in a way that is close to the use of ‘adaptability’ in this thesis. In their paper ‘Resilient Machines Through Continuous Self-Modelling’ (Bongard, J., Zykov, V. and Lipson, H., 2006), a robot which has self-modelling features can adapt to unexpected situations, especially damage, such as a missing leg on their arthropod-type robot. They mentioned that the benefit of this methodology is that it can reduce time, energy costs and risk for autonomously improving systems, as against systems without internal self-models. (ref)

They made two particularly useful points for this thesis: one is the idea of ‘self-modelling’ and the second is ‘continuous’ modelling. ‘Self-modelling’ corresponds with digital modelling on the screen in this PA (Programmable Architecture) proposal. Hence the physical robot corresponds to the building in PA. The digital model is equivalent to the system of Architecture (building and system) described earlier. Interestingly, the approach is similar though the research fields differ. In traditional architecture the ‘model’ is a scaled dummy to show to the client. Whether a hand drawing or a CAD drawing, it represents a copy of the actual building.

However, with the advancement of computers and computer-aided design (CAD) tools, it can become more than a drawing. For example, it can be used as a mathematical model for structural analysis, as a digital model for computational fluid dynamics (CFD) analysis, or for building information modelling (BIM). ‘Continuous modelling’ relates to temporal design in PA. In the field of robotics it is natural to design a machine that operates within the flow of time, but within Architecture this is quite a unique point. Basically, dynamic modelling has not been needed for architectural design, because buildings are static. However, PA is truly dynamic, so a new strategy is necessary to replace the blueprint.

2-3-2. Cybernetics

Cybernetics, advocated by the American mathematician Norbert Wiener in the late 1940s, was a synthetic academic discipline that dealt with the matter of control and communication in systems such as organisms or machines. Wiener regarded the operation of the mind, life, society, language and many other things as a dynamic system of control. Our environment reflects the realities of the cybernetic realm as we deal with some things (variables) that cannot be controlled and with others that are adjustable. The aim of cybernetics is to create the most appropriate environment for us by properly setting the values of the controllable variables on the basis of the values from the past until the present.

The concept of cybernetics greatly influenced the disciplines of Social Science as well as those of Natural Science, as it was relevant to a large number of academic fields. It had direct connections with such areas as automation, navigation, telecommunication, computing and automata. Moreover, as the theory of cybernetics aimed to study the nervous system as a kind of communication system, it was applied to the fields of Physiology and Psychology. In addition, a discipline called Bio-Cybernetics was invented, aiming to investigate, for instance, the information of living bodies. Cybernetics was also applied to Economics, Sociology and the theory of financial planning, and developed into operations research and systems theory. It can be argued that cybernetics provides the basic foundation of information science as we know it today.

The new system theories developed in the late twentieth century seek to explain various phenomena that cannot be captured within the framework of cybernetics, which considers systems from the perspective of control. Theories such as Humberto Maturana and Francisco Varela’s Autopoiesis, Magoroh Maruyama’s second cybernetics, and Hermann Haken’s Synergetics basically aimed at superseding cybernetics.

These new system theories have a different orientation than cybernetics. Whereas cybernetics basically described a system as an entity that maintains itself toward the goal of control, the new theories of systems generally tended to illustrate a system as an incessant process of deviation, paying attention to the dynamic order that is generated through these deviations. Ilya Prigogine’s 'dissipative structure' is a good example of such a new system theory of deviation. It describes a system far from thermodynamic equilibrium, which sustains its stability by absorbing energy and materials from its surroundings and emitting them in different forms.

2-3-3. Control System and Control Theory

Control theory describes the methods in engineering and mathematics which aim to control dynamic behaviour. The usual objective is to control a system by adjusting its behaviour through the use of feedback. Navigation, machine design, and climate modelling are examples of areas where control theory is applied. In control theory there are four basic functions: Measure, Compare, Compute, and Correct. These four functions are complemented by five elements: the Detector, the Transducer, the Transmitter, the Controller, and the Final Control Element. Block diagrams are often used to explain the flow of the system.

In early control systems, a relatively simple system called an ‘Open-loop Controller’, also known as a non-feedback system, was used. As a result, the controller could not compensate for changes. For instance, in a car using cruise control, a change in the slope of the road could not be accounted for. With the development of the ‘Closed-loop Controller’, sensors monitored the system output and fed back the data to maintain the desired system output. Feedback was able to dynamically compensate for the difference between actual data and desired data. It is from this feedback that the paradigm of the control loop arises: the control affects the system output, which in turn is measured and looped back to alter the control.

The system is often called the ‘plant’ and its output follows a control signal called the ‘reference’ (in this thesis it is called the ‘objective function’), which may be a fixed or changing value. A ‘Controller’ is another function which monitors the output and compares it with the reference. The ‘Error Signal’, which is the difference between actual (sensing data) and desired output (reference/objective function), is applied as feedback to the input of the system, to bring the actual output closer to the reference.
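These terms can be made concrete with a minimal simulation (illustrative numbers throughout: a first-order system stands in for the plant, and a purely proportional controller closes the loop):

```python
# Closed loop: the error signal (reference - output) drives the
# controller, whose output drives the plant.

def simulate(reference=1.0, gain=2.0, steps=100, dt=0.05):
    output = 0.0
    for _ in range(steps):
        error = reference - output          # error signal
        control = gain * error              # proportional controller
        output += dt * (-output + control)  # simple first-order plant
    return output
```

For this plant the loop settles at gain/(gain+1) of the reference, so `simulate(gain=2.0)` converges near 2/3 rather than 1.0: proportional feedback alone leaves a steady-state offset, and raising the gain shrinks it but never removes it, which is one motivation for the integral action of the PID controllers discussed in section 2-3-3-2.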

There are a number of key concepts referenced in control systems. These include ‘Stability’, which considers whether the output will converge to the reference value or oscillate (this will be explained in a later chapter). The ‘Transfer Function’, also known as the ‘system function’ or ‘network function’, is a mathematical representation of the relation between the input and output based on the differential equations describing the system. This too will be explained in a later chapter.

Originally, control engineering was all about continuous systems. Development of computer control tools created a requirement for discrete control system engineering because the communications between the computer-based digital controller and the physical system were governed by a computer clock. The equivalent to the Laplace transform in the discrete domain was the z-transform. Today many control systems are computer controlled, consisting of both digital and analog components.

At the design stage either digital components are mapped into the continuous domain and the design is carried out in the continuous domain, or analog components are mapped into the discrete domain and design is carried out there. The first of these two methods is more commonly encountered in practice because many industrial systems have many continuous systems components, including mechanical, fluid, biological and analog electrical components, with few digital controllers.

Alongside the development of digital control systems, the design process has progressed from paper-and-ruler based manual design to computer-aided design, and now to computer-automated design (CAutoD), which has been made possible by evolutionary computation. CAutoD can be applied not just to fine-tuning a predefined control scheme, but also to control structure optimization, to system identification and to the invention of novel control systems based purely upon a performance requirement, independent of any specific control scheme.

All these developments have led to the development of a field called 'Control Engineering' which seeks stable behaviour in various related systems with a particular focus on the cybernetic aspect.

2-3-3-1. Feedback Control

Fig 2-3-3-1, 1 : Feedback system, drawn by the Author referring to Brews Ohare

Maintaining a desired system performance despite disturbance using negative feedback to reduce system error.

Feedback is a manipulation that turns an output (result) of a system back into an input (cause). It is a basic principle which defines the behaviour of a system in the field of electronic engineering. Feedback systems aim to manipulate the behaviour of a dynamic system, a mathematical conception in which a fixed rule describes the shift in certain conditions over time as inputs are applied. Control theories, such as cybernetics, explain how this behaviour is modified by feedback.

Feedback, generally speaking, is the phenomenon whereby the results of a system’s reactions influence the system itself. There are two kinds of feedback: negative feedback, which functions in an inhibitory manner, and positive feedback, which functions in a promotional way. Feedback works on the principle of self-control; it is an integral part of living organisms, sustaining homeostasis.

In contrast to feedback, a system that removes noise effects in advance by predicting them and taking appropriate steps to negate them is called a feed-forward control system. Feed-forward control can be argued to be more effective than feedback in that feedback systems cannot make corrective operations before the noise effects appear. However, feed-forward control generally must be used together with a feedback system: the feed-forward control removes the noise effects that can be predicted, and the feedback system takes care of the rest.
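This division of labour can be written out directly (a toy sketch with made-up gains: the feed-forward term cancels the predicted disturbance, and a proportional feedback term mops up whatever was not predicted):

```python
# Combined feed-forward / feedback control action.

def control_step(reference, measured, predicted_disturbance, k_fb=0.5):
    feedforward = -predicted_disturbance      # negate what can be predicted
    feedback = k_fb * (reference - measured)  # correct the residual error
    return feedforward + feedback
```

With a predicted disturbance of 0.3 and a measured output of 0.8 against a reference of 1.0, the action is -0.3 + 0.5 x 0.2 = -0.2; if the prediction is perfect and the output is on target, the action reduces to pure disturbance cancellation.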

2-3-3-2. Controller (P, PI, PID controller)

Fig 2-3-3-2, 1 : PID Controller (P, PI, PID controller), drawn by the Author referring to TravTigerEE

The PID Controller (Proportional-Integral-Derivative Controller), one of the most used feedback controllers in classical control theory, controls by using three values: the proportional, the integral and the derivative, denoted P, I, and D. The Ziegler-Nichols method is one of the most famous methods of tuning such a controller. Whereas feedback control systems require relatively high power through an actuator, feedback measuring systems draw fairly low power, as their output devices (for example, indicators and inverse transducers) are low-power. Feedback in a measuring system improves the accuracy and speed of measurement, and allows remote indication and non-contact measurement. However, it increases the complexity of design and operation, as well as size and cost.

A proportional-integral-derivative controller (PID controller) is a control-loop feedback mechanism widely used in industrial control systems. A PID controller calculates an error value as the difference between a desired set point and a measured process variable. The controller attempts to minimize the error by adjusting a manipulated process variable. The PID controller algorithm contains three separate constant parameters: the proportional (P), the integral (I) and the derivative (D) values. Some applications may require using only one or two parameters (or control actions) to provide the appropriate system control; this is achieved by setting the other parameters to zero. A PID controller will be called a PI, PD, P or I controller in the absence of the respective control actions. P depends on present errors, I on the accumulation of past errors, and D is a prediction of future errors based on current trends of change. The weighted sum of these three actions is used to adjust the process via a control element such as the position of a control valve or a damper. However, the use of the PID algorithm for control does not guarantee optimal control of the system or stabilization of the system.
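A textbook discrete-time version of the algorithm is short enough to state in full (the gains and the plant below are arbitrary illustrative choices, not values tuned by the Ziegler-Nichols method):

```python
# Discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulation of past errors (I)
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement                        # present (P)
        self.integral += error * self.dt                      # past (I)
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / self.dt  # trend (D)
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a simple first-order plant toward a set point of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.05)
output = 0.0
for _ in range(400):
    u = pid.update(1.0, output)
    output += 0.05 * (-output + u)
```

Unlike pure proportional control, the integral term keeps accumulating until the error vanishes, so for this plant the output settles on the set point itself rather than leaving a steady-state offset.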

PID controllers originated in governor design in the 1890s and were subsequently developed for use in automatic ship steering. The theoretical analysis of a PID controller was first published by the Russian-American engineer Nicolas Minorsky (Minorsky, 1922). Minorsky was designing automatic steering systems for the US Navy, and from his observations of a helmsman he noted that the helmsman controlled the ship based not only on the current error, but also on past error and the current rate of change. Minorsky reduced this to a series of equations. His goal was stability, not general control, which simplified the problem significantly. While proportional control provides stability against small disturbances, the integral term was added to deal with steady disturbances, in particular a stiff gale. The derivative term was added to improve control.

Limitations of PID control

While PID controllers are applicable to many control problems and often perform satisfactorily, they do not provide optimal control in general. The fundamental difficulty with PID control is that it is a feedback system and thus its overall performance is reactive and compromise-based; it lacks direct knowledge of the process. While PID control is the best controller where no model of the process exists, better performance can be achieved by directly modelling the actor of the process without resorting to an observer.

PID controllers can also fail to work properly when the PID loop gains are poorly chosen, causing the control system to oscillate around the control set-point value. The non-linearities of a process are a further difficulty: the system may not react well to changing process behaviour, and may lag in responding to large disturbances.

The most significant improvement to the difficulties described above is to incorporate feed-forward control based on knowledge about the system, and use the PID only to deal with error. Alternatively, PIDs can be altered in some minor ways, such as cascading several PID controllers, changing the parameters (for example adaptively modifying them based on performance) and improving measurement (higher sampling frequency, greater precision and accuracy, and low-pass filtering).

2-3-3-3. Sensing / Measurement and Noise

Overview: A sensor is a device that identifies events or changes in quantities and returns a corresponding output, generally in the format of an optical or electrical signal. For example, a thermocouple outputs a voltage in response to temperature changes. A mercury thermometer is also a sensor: it converts the measured temperature into the expansion and contraction of a liquid which can be read on a calibrated glass tube. Sensors are used in everyday items such as touch-sensitive lift buttons (tactile sensors) and desk lamps which dim or brighten at a touch of the base, along with numerous other applications of which most people are never fully aware. With developments in precision machinery and easily handled microcontroller platforms, new types of sensors such as MARG sensors (Bennett, S. 1993) are used extensively in various fields, while analogue sensors such as potentiometers and force-sensitive resistors are still widely used. Applications for such sensors include manufacturing and machinery, airplanes and aerospace, cars, medicine and robotics. All living organisms have biological sensors with functions similar to those of the mechanical devices described.

A sensor's sensitivity describes how much the sensor's output changes with a change in its input. For example, if the mercury in a thermometer moves 1 cm when the temperature changes by 1 °C, the sensitivity is 1 cm/°C (the slope Δy/Δx, assuming a linear characteristic). Some sensors affect what they measure. For instance, a room-temperature thermometer placed in a cup of hot liquid cools the liquid while the liquid heats the thermometer. Sensors are therefore designed to have a minimal effect on what is measured, and making the sensor smaller often improves this. Technological progress enables sensors to be manufactured at ever smaller scales using MEMS technology. In most cases, micro sensors achieve significantly higher speed and sensitivity than macroscopic sensors.

In terms of measurement errors and noise, there are several kinds. One kind of error relates to the ‘Resolution’ of the sensor. The resolution of a sensor is the smallest change the sensor can detect in whatever it is measuring. The resolution is related to the precision with which the measurement is conducted.

Another kind of error is ‘Noise’. In electronics, a random time-varying fluctuation in an electrical signal is called noise, and noise exists in every electronic circuit. Noise is a summation of undesirable or disturbing energy, whether natural or man-made. Although it is generally unwanted, since it causes errors or undesired random disturbances in information signals, it can be used purposefully in some applications, such as generating random numbers or dithering. In dithering, noise is intentionally applied in order to randomize the quantization error.
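As a toy illustration of dithering (the values are hypothetical): quantizing a constant 0.3 to the nearest integer always returns 0, while adding uniform noise before quantization lets the average of many quantized readings recover the underlying value.

```python
import random

random.seed(0)
true_value = 0.3
# without dither, the quantizer always returns the same wrong value
plain = [round(true_value) for _ in range(10000)]
# uniform dither in [-0.5, 0.5) randomizes the quantization error,
# so the long-run mean of the quantized samples approaches 0.3
dithered = [round(true_value + random.uniform(-0.5, 0.5)) for _ in range(10000)]
mean = sum(dithered) / len(dithered)
```

The individual dithered readings are still only 0 or 1, but their statistics carry finer information than the quantizer step.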

A final kind of error is ‘Deviation’. Several types of deviation can be observed if the sensor is not ideal.

• The sensitivity of the selected sensor may differ from the value expected; this is called a sensitivity error. The sensor may also be sensitive to properties other than the one intended to be measured. If the sensitivity is not constant over the range of the sensor, this is called non-linearity. The amount by which the output differs from ideal behaviour over the full range of the sensor is often quoted as a percentage of the full range.

• The output signal will eventually reach a minimum or maximum when the measured property’s value exceeds the limits of the sensor’s output range. The full scale range defines the maximum and minimum values of the measured property. If the sensor has a digital output, the output is only an approximation of the measured property.

• If the signal is monitored digitally, limitations of the sampling frequency can cause an error. If the output signal is not zero when the measured property is zero, the sensor has an offset or bias.

• If the output signal slowly changes independently of the measured property, this indicates a slow deterioration of the sensor’s properties over a long period of time.

These deviations can be classified as either systematic errors or random errors. Appropriate calibration strategies can compensate for systematic errors, while signal processing, such as filtering, can reduce the random error caused by noise.
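A minimal sketch of both remedies on an invented sensor: a calibration with an assumed known gain and offset removes the systematic error, and a moving-average filter reduces the random noise.

```python
import random

def calibrate(raw, gain, offset):
    return (raw - offset) / gain              # invert the systematic error

def moving_average(samples, window):
    # simple low-pass filter: average each run of `window` samples
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

random.seed(0)
gain, offset = 1.2, 0.5     # systematic error, assumed known from calibration
true = 10.0
raw = [gain * true + offset + random.gauss(0, 0.2) for _ in range(100)]
corrected = [calibrate(r, gain, offset) for r in raw]   # removes the bias
smoothed = moving_average(corrected, 20)                # reduces the noise
```

Calibration alone leaves the random scatter; filtering alone leaves the bias; together they recover the true value closely.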

2-3-3-4. Actuation

An actuator is a type of motor that moves or controls a mechanism within a system. It is powered by an energy source such as an electric current, hydraulic fluid pressure or pneumatic pressure, and converts that energy into motion. An actuator can be controlled in a simple manner (by a predetermined mechanical or electrical device), it can be software-based (e.g. a printer driver or a robot control system), or it can take input from a human or another system. In terms of applications, in engineering actuators are frequently used to introduce motion, or to clamp an object so as to prevent motion. In electronic engineering, actuators are a subdivision of transducers: devices that transform an electrical signal into motion. In virtual instrumentation, actuators and sensors are the hardware complements of virtual instruments.

Motors are mostly used when circular motion is required, while some actuators, such as piezoelectric actuators, are intrinsically linear. Motors can nevertheless provide linear force by transforming circular motion into linear motion with a screw or a similar mechanism. Conversion between circular and linear motion is commonly made via a few simple types of mechanism. Screw: the screw jack, the ball screw and the roller screw actuator all operate on the principle of the simple machine known as the screw. By rotating the actuator's nut, the screw shaft moves in a line; by moving the screw shaft, the nut rotates. Wheel and axle: the hoist, the winch, the rack and pinion, the chain drive, the belt drive, the rigid chain and the rigid belt actuator operate on the principle of the wheel and axle. By rotating a wheel/axle (e.g. drum, gear, pulley or shaft), a linear member (e.g. cable, rack, chain or belt) moves; by moving the linear member, the wheel/axle rotates.

Examples of other actuators include: the comb drive, the digital micromirror device, the electric motor, the electro active polymer, the hydraulic piston, the piezoelectric actuator, the pneumatic actuator, the relay, the servomechanism and the thermal bimorph. Here is a brief explanation of the mechanisms of various actuators.

The ‘Hydraulic actuator’: A hydraulic actuator consists of a cylinder or fluid motor that uses hydraulic power to provide mechanical operation, giving an output of linear, rotary or oscillatory motion. As liquid is almost incompressible, a hydraulic actuator produces considerable force, but its acceleration and speed are limited. The hydraulic cylinder consists of a hollow cylindrical tube along which a piston can slide. When pressure can be applied on either side of the piston, the cylinder is called ‘double acting’: a difference in pressure between the two sides of the piston moves it to one side or the other. The term ‘single acting’ is used when fluid pressure is applied to just one side of the piston, so the piston can move in only one direction; in this case a spring is frequently used to give the piston its return stroke.

The ‘Pneumatic actuator’: A pneumatic actuator converts energy from a vacuum or from compressed air at high pressure into either linear or rotary motion. A pneumatic actuator is useful for main engine controls because of its quick response in starting and stopping, as the power source does not need to be held in reserve for operation. Pneumatic actuators can produce large forces from relatively small pressure changes. These forces are often used with valves to move diaphragms and control the flow of liquid through the valve.

The ‘Electric actuator’: The electric actuator is one of the cleanest and most readily available kinds of actuator because it does not use oil. Electrical energy is used to actuate equipment such as multi-turn valves through an electric motor, which converts electrical energy into mechanical torque.

The ‘Thermal or magnetic actuator’ (shape memory alloys): These actuators use shape memory materials (SMMs), such as shape memory alloys (SMAs) or magnetic shape-memory alloys (MSMAs) which can be actuated by applying thermal or magnetic energy. The actuators tend to be compact, lightweight, economical and provide a high amount of power per unit volume.

The ‘Mechanical actuator’: A mechanical actuator functions by converting rotary motion into linear motion to execute movement. It operates using devices such as gears, rails, pulleys and chains; the rack and pinion is a typical example.

2-3-3-5. Stability and Catastrophic Collapse

The field of ‘Stability Theory’ explores the stability of systems, and various types of stability have been identified, especially for the solutions of differential equations describing dynamic systems. ‘Nominal Stability’ is the stability of a closed-loop system under the assumption that the model is perfect. By contrast, ‘Robust Stability’ allows for uncertainty in the model. In situations of plant instability, the amount of data is highly problematic; ‘Calculation Stability’ is an effective method for resolving this issue, in which a form of re-parameterization is frequently used. For linear systems, ‘Exponential Stability’ is widely used, and two further types of stability are distinguished: ‘Internal Stability’ and ‘Bounded-input bounded-output Stability’ (BIBO Stability). The former concerns whether the system outputs stable values when there are no inputs; the latter concerns whether the system’s outputs stay within a certain range of values (bounded) when the inputs are bounded. For nonlinear systems with an input present, the corresponding notion is ‘Input-to-state Stability’, which amalgamates Lyapunov stability with a bounded-output-like condition. For asymptotic stability in nonlinear systems, the most common approach is based on the theory of Lyapunov (Lyapunov, A. M. 1992). ‘Lyapunov Stability’ concerns the stability of solutions near a point of equilibrium. In simple terms, if all solutions of the dynamical system that start out near an equilibrium point x_e stay near x_e forever, then x_e is Lyapunov stable.
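The difference between internal stability and BIBO stability can be sketched on a toy discrete first-order system x[k+1] = a·x[k] + u[k] (the coefficients here are hypothetical): with |a| < 1 the state decays with zero input and remains bounded for a bounded input, while |a| > 1 diverges.

```python
# Sketch: stability of the discrete first-order system x[k+1] = a*x[k] + u[k].
def simulate(a, u, x0=1.0, steps=200):
    x = x0
    out = []
    for k in range(steps):
        x = a * x + u(k)
        out.append(x)
    return out

stable   = simulate(0.5, lambda k: 0.0)   # internal stability: decays to zero
bounded  = simulate(0.5, lambda k: 1.0)   # BIBO: bounded input, output -> 2
unstable = simulate(1.5, lambda k: 0.0)   # |a| > 1: exponential divergence
```

The decay of the stable case is exponential (proportional to a^k), which is exactly the sense of ‘Exponential Stability’ above.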

Below, concrete descriptions and explanations are given of the following system behaviours: ‘Exponential Growth’, ‘Generic Structure’, ‘Exponential Decay’, ‘Goal-seeking Behavior’, ‘Oscillation’, ‘S-shaped Growth’ and ‘Catastrophic Collapse’.

- Exponential Growth

In his paper, Radzicki (Radzicki, M.J. 1997) takes ‘a herd of elephants’ as an example.

Fig 2-3-3-5,1 The Diagrams of Exponential Growth

In the above model, which has no feedback, the number of elephants simply increases; it increases even if the original number of elephants is zero. In the following diagram, feedback on the number of newborn calves is added.

Fig 2-3-3-5,2 The Diagrams of Exponential Growth Two

The result is that if the number of elephants begins at zero, the size of the herd remains zero. If the original number is instead set at ten, the result shows exponential growth.
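The elephant example can be sketched as a minimal stock-flow loop (the birth rate and time horizon are hypothetical): because the birth flow is fed back from the stock, a herd of ten grows exponentially while a herd of zero stays at zero.

```python
# Stock-flow sketch of the exponential-growth loop (hypothetical rates).
def herd(initial, birth_rate=0.05, years=50):
    elephants = initial
    for _ in range(years):
        elephants += birth_rate * elephants   # feedback: births depend on the stock
    return elephants
```

Without the feedback (a constant inflow instead of `birth_rate * elephants`), the stock would grow linearly even from zero, reproducing the first diagram's behaviour.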

- Generic Structure

Over the years, system dynamicists have identified combinations of stocks, flows and feedback loops that seem to explain the dynamic behavior of many systems. These frequently occurring stock-flow-feedback loop combinations are often referred to as ‘generic structures.’ The typical model of the generic system is as follows.

Fig 2-3-3-5,3 The Diagrams of Generic Structure

The above diagram is an example of a loop system illustrated in the paper. This system can explain a large number of phenomena in the world. The following diagram, for example, shows how knowledge accumulates in a natural-science discipline.

Fig 2-3-3-5,4 The Diagrams of Generic Structure 2

- Exponential Decay

The diagram below illustrates exponential decay. If the target value (goal) in a goal-seeking behavior system is set to zero, the result is exponential decay. The figure below is a system dynamics representation of a linear first-order negative feedback loop system with an implicit goal of zero.

Fig 2-3-3-5,5 The Diagrams of Exponential Decay

- Goal-seeking Behavior

In Chapter 3, Radzicki discusses two types of feedback loops: positive loops and negative loops. Positive loops generate exponential growth (or rapid increase), while negative loops produce goal-seeking behavior. As the diagram below shows, a goal-seeking behavior system always feeds back the discrepancy between the goal and the stock.
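The goal-seeking loop can be sketched as a linear first-order negative feedback system (the rate and step count are hypothetical): the flow is a fraction of the discrepancy between goal and stock, and setting the goal to zero reduces it to exponential decay.

```python
# Sketch of a first-order negative feedback loop driven by the discrepancy.
def goal_seek(stock, goal, rate=0.2, steps=100):
    for _ in range(steps):
        stock += rate * (goal - stock)   # flow proportional to the discrepancy
    return stock

approached = goal_seek(0.0, 10.0)   # stock rises toward the goal
decayed = goal_seek(10.0, 0.0)      # goal of zero: exponential decay
```

The same structure therefore produces both goal-seeking behaviour and exponential decay, depending only on the goal value.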

Fig 2-3-3-5,6 The Diagrams of Goal-seeking Behavior

- Oscillation

Oscillation occurs due to delays of information in a feedback system: a ‘delay’ in the measuring part (sensing, for example) delays the transmission of the information. The following model therefore includes a ‘desired system level’ (shown as the blue line), which controls the delay of the information and thus decreases the degree of instability in the oscillating system. There are four types of oscillation: ‘Sustained oscillation’, ‘Damped oscillation’, ‘Exploding oscillation’ and ‘Chaotic oscillation’.
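A minimal sketch of delay-induced oscillation (all values hypothetical): adding a measurement delay to a goal-seeking loop makes the stock overshoot the goal and oscillate around it in a damped fashion.

```python
from collections import deque

# Sketch: the goal-seeking loop acts on information several steps old,
# which produces overshoot and damped oscillation around the goal.
def delayed_goal_seek(goal=10.0, rate=0.2, delay=3, steps=60):
    stock = 0.0
    history = deque([0.0] * delay, maxlen=delay)  # delayed measurements
    trace = []
    for _ in range(steps):
        measured = history[0]            # the oldest stored measurement
        history.append(stock)
        stock += rate * (goal - measured)
        trace.append(stock)
    return trace

trace = delayed_goal_seek()
```

With a larger rate or a longer delay, the same loop crosses the stability boundary and the oscillation grows instead of decaying.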

Fig 2-3-3-5,7: The Diagrams of Oscillation

- S-shaped Growth

S-shaped growth is the characteristic behavior of a system in which positive and negative feedback structures fight for dominance, resulting in a long-run equilibrium. In the paper, as an example of S-shaped growth, Radzicki describes the relationship between elephants’ birth rate and their death rate.

Fig 2-3-3-5,8: The Diagrams of S-shaped Growth

- Catastrophic Failure

In relation to stability, ‘Catastrophic Failure’ means a sudden, total failure from which recovery is impossible. It often leads to cascading systems failure: a failure in a system made up of interconnected parts in which the failure of one part can cause the failure of successive parts. Structural failure is the most common example, but the term has been extended to many other disciplines where comprehensive and irrecoverable loss occurs. Such failures are explored using the methods of forensic engineering, which tries to determine the cause or causes of failure.

2-3-4. Deterministic vs Stochastic in prediction and forecasting

Deterministic algorithms and stochastic algorithms are both kinds of combinatorial optimization algorithm: heuristic algorithms which seek an optimal solution without examining all possible solutions. Deterministic algorithms search for solutions by making definitive selections; searching within limited, specific areas (local search) is an example. By contrast, stochastic algorithms make random decisions while searching for a solution. Deterministic algorithms will therefore generate the same solution to a given problem every time.

By contrast, probabilistic or stochastic algorithms may not generate the same solution each time. Heuristic algorithms are classified into repetitive (iterative) algorithms and constructive algorithms. Typically, constructive heuristic algorithms start the search with a single element (although multiple starting elements are possible). While searching for a complete solution, further elements are continually selected and added, creating a partial solution over an increasingly large set of elements. Once a selected element has been added, it is not removed from the partial solution at a later stage: constructive algorithms successively augment their solution.

A repetitive heuristic method, such as a search within limited areas, requires two inputs. The first is a description of the problem to be solved, with examples, and the second is an initial solution to the problem. Repetitive heuristic methods modify the initially given solution in order to improve its evaluation function. When the evaluation level is not improved, the algorithm returns "No" and keeps the existing solution. If it is improved, the algorithm returns the improved solution and repeats the evaluative steps using the new solution. Normally this process is carried out repeatedly until the evaluation level stops improving. Frequently this algorithm is applied in conjunction with a constructive heuristic method, to improve the solution the latter generates.
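The repetitive (iterative improvement) procedure described above can be sketched as a simple hill climber on a toy objective; the objective function and neighbourhood used here are invented for illustration.

```python
# Sketch: iterative improvement — keep a candidate only if it improves
# the evaluation, and stop when no neighbour improves it.
def hill_climb(evaluate, initial, neighbours):
    current = initial
    improved = True
    while improved:
        improved = False
        for cand in neighbours(current):
            if evaluate(cand) > evaluate(current):   # strictly better?
                current, improved = cand, True
                break
    return current

# Toy problem: maximize f(x) = -(x - 3)^2 over the integers.
best = hill_climb(lambda x: -(x - 3) ** 2, 0, lambda x: [x - 1, x + 1])
```

On this unimodal toy objective the climb reaches the optimum; on rugged objectives it stops at the first local optimum, which is why it is often combined with a constructive or stochastic method.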

In order to design a superior constructive algorithm, a deep understanding and analysis of the problem to be solved is required, along with the development of an appropriate constructive heuristic method. In many cases it is not easy to apply heuristic techniques to real problems.

2-3-5. Optimization

2-3-5-1. Objective functions (purpose)

Consider the mathematical programming problem, where f : A → R. The goal is to find x0 that satisfies the following condition:

∃ x0 ∈ A : ∀ x ∈ A, f(x0) ≥ f(x)   (maximization problem)

Here f is called the objective function to be maximized over the variable x on the feasible set A (reversing the inequality gives the corresponding minimization problem).

Generally in optimization design, the designed objects are determined by defining constraint conditions. In the simulation process of a design, objective functions are used to determine the constraints and, in a linear programming problem, to find the optimal solution.

In the Traveling Salesman Problem (TSP), for example, the objective function is described in terms of the distance d_ij between nodes i and j (of N nodes) and whether or not each arc (the path between two nodes) is included: x_ij = 1 means the arc is included in the circuit, while x_ij = 0 means it is not. The fitness value of the circuit, F, is described as:

F = Σ_{i=1..N} Σ_{j=1..N} d_ij · x_ij
A reverse process is used for analysis: the levels of the objectives are calculated, and the variables are found from the plural objectives. Following this, the objectives are used in the process going from design to analysis.
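The TSP objective can be evaluated directly for a given circuit; the distance matrix below is invented for illustration, and only the arcs with x_ij = 1 (those used by the tour) contribute to F.

```python
# Sketch: evaluating the tour-length objective F = sum_ij d_ij * x_ij,
# where x_ij = 1 exactly for the arcs used by the circuit.
def tour_length(dist, tour):
    n = len(tour)
    # sum d[i][j] over consecutive arcs, returning to the start node
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
f = tour_length(d, [0, 1, 3, 2])   # circuit 0 -> 1 -> 3 -> 2 -> 0
```

An optimizer for the TSP repeatedly calls such an evaluation while varying the tour, which is the role the objective function plays in the design-to-analysis loop.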

2-3-5-2. GA vs others

When looking for the optimal parameters (design variables) of an objective function, whether from calculation results or from experimental results using those parameters, various parameter values need to be tested to find out which conditions improve the results. The search needs to be conducted efficiently: not by randomly sampling possibilities, but by properly using combinatorial optimization methods.

There are various combinatorial optimization methods, such as Simulated Annealing, Genetic Algorithms, Tabu Search, Simulated Evolution, and Stochastic Evolution.

Compared to the other methods, GA has a number of advantages.

1. GA can adapt to the evaluation function without requiring the objective function to be differentiable.

2. It can operate directly on a devised representation (encoding) of the solution.

3. A solution can be treated as a complete solution candidate even during the early stages of the search, before it has been refined by the procedure.

4. It is easy to extend to multi-objective problems, because GA searches from multiple points simultaneously.

GA works effectively on problems where the calculation of the objective function has problematic qualities. For problems with a gently sloping, well-behaved solution space, other algorithms are more appropriate.

2-3-5-3. Simulated Annealing

Simulated Annealing (SA) is one of the most widely used methods for solving combinatorial optimization problems. SA is a heuristic with general-purpose adaptability, belonging to the class of non-deterministic algorithms.

In an SA procedure, “T” is the time period of the calculation and is determined beforehand. The system repeats its procedure until the time equals “T”.

The following in brief describes how the coding progresses.

A suitable state (solution) is chosen with a specific score.

If the score of the next state is better than the current score, the next state is selected.

If the score of the next state is worse than the score of the current state, the move may still be made: the probability of accepting such a move is high when time “t” is close to 0, and approaches zero as “t” approaches “T” (the final stage).

In the end, the state which is related to the best score is chosen.

Example: If it is a question of calculating the [maximum of y=f(x)],

The state is “x”

The following state is x' = x + a (where ‘a’ is an incrementally small value)

The score is “y”

This would be continued for a period of time “T”.
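The procedure above can be sketched as follows (the cooling schedule, step size and seed are all hypothetical choices): the state is x, the candidate state is x + a for a small random a, and worse moves are accepted with a probability that shrinks as t approaches T.

```python
import math, random

# Sketch of simulated annealing maximizing y = f(x).
def anneal(f, x0=0.0, T=5000, step=0.1, temp0=1.0):
    random.seed(1)
    x, best = x0, x0
    for t in range(T):
        temperature = temp0 * (1 - t / T) + 1e-9    # cools toward zero
        nxt = x + random.uniform(-step, step)       # next state x' = x + a
        delta = f(nxt) - f(x)
        # always accept improvements; accept worse moves with a
        # probability that falls as the temperature drops
        if delta > 0 or random.random() < math.exp(delta / temperature):
            x = nxt
        if f(x) > f(best):
            best = x                                # remember the best state
    return best

best = anneal(lambda x: -(x - 2.0) ** 2)   # maximum at x = 2
```

Early in the run the search wanders freely (escaping local optima); late in the run it behaves like a greedy hill climber, and the best state visited is returned.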

2-3-5-4. Dantzig’s Simplex Method

The Simplex method is used to maximize or minimize an objective function in linear programming. The optimum of the objective function lies at an intersection point of the limiting conditions (constraints); therefore a solution can be obtained by investigating the objective function at each intersection point generated by the constraints.

The method which was made to investigate these intersections efficiently is called the “Simplex Method”. Its computational procedure is:

1. Find a base feasible solution by some method.

2. Investigate whether the obtained solution is the optimal solution.

3. If a solution is optimal, the procedure ends. Otherwise, use the new, improved solution and return to step 2.
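The geometric idea behind the method can be illustrated on a tiny invented LP by brute-force enumeration of the vertices (intersections of constraint boundaries); this is an illustration of the vertex principle only, not an implementation of the simplex algorithm itself, which reaches the same optimum while visiting far fewer vertices by moving only between improving neighbours.

```python
from itertools import combinations

# Toy LP: maximize 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
cons = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def vertices(cons):
    # intersect every pair of constraint boundaries and keep feasible points
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue                       # parallel boundaries: no vertex
        x = (c1 * b2 - c2 * b1) / det      # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            yield (x, y)

best = max(vertices(cons), key=lambda p: 3 * p[0] + 2 * p[1])
```

Here the feasible vertices are (0,0), (2,0), (2,2) and (0,4), and evaluating the objective at each identifies the optimum.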

2-3-5-5. Stochastic Diffusion Search/Ant Algorithms

Both “Stochastic Diffusion Search” (SDS) and “ant algorithms” belong to the “swarm intelligence” algorithms used in artificial intelligence systems. They are based on the study of the collective behaviour of self-organising systems.

SDS is an agent-based probabilistic global search and optimization method, suitable for problems that can be decomposed. Each agent repeatedly evaluates its hypothesis by randomly choosing a partial objective function determined by a parameter linked to the agent's current hypothesis. In standard SDS, the evaluation of the partial function has only two outcomes: each agent becomes either active or inactive.

Information about hypotheses is spread through the group of individuals by direct one-to-one communication between agents. This contrasts with “stigmergy”, the indirect communication used when modelling the optimisation behaviour of ant colonies, which is learnt from ants' trail behaviour. Through this recurrent feedback, clusters of agents are gradually established around the strongest solutions. SDS is an effective search algorithm that can be described mathematically.

Ant colony optimization (ACO) is a meta-heuristic optimization algorithm used to search for approximate solutions to difficult combinatorial optimization problems. ACO attempts to find a solution by means of artificial ants that imitate real ants, moving over a graph representation of the problem. Artificial ants help other artificial ants to search for a better solution by leaving artificial pheromones on the graph. ACO has shown its usefulness in solving a large number of optimization problems.
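A minimal sketch of standard SDS on a toy string-search problem (the population size, iteration count and strings are invented): each agent holds a position hypothesis and tests a single randomly chosen letter (a partial evaluation); inactive agents copy the hypothesis of a randomly chosen active agent, or pick a fresh random hypothesis.

```python
import random

# Sketch of standard SDS locating a model string inside a search string.
def sds(search, model, agents=50, iterations=200, seed=0):
    rng = random.Random(seed)
    span = len(search) - len(model) + 1
    positions = [rng.randrange(span) for _ in range(agents)]
    active = [False] * agents
    for _ in range(iterations):
        # test phase: each agent checks one randomly chosen letter
        for i, pos in enumerate(positions):
            j = rng.randrange(len(model))
            active[i] = (search[pos + j] == model[j])
        # diffusion phase: inactive agents copy an active agent, or restart
        for i in range(agents):
            if not active[i]:
                k = rng.randrange(agents)
                if active[k]:
                    positions[i] = positions[k]   # one-to-one communication
                else:
                    positions[i] = rng.randrange(span)
    # the largest cluster of agents marks the best match
    return max(set(positions), key=positions.count)

found = sds("xxxxhelloxxxx", "hello")
```

Agents at the true match stay active on every partial test, so the cluster there grows through the diffusion phase until it dominates the population.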

2-3-6. Evolutionary Computing

'Evolutionary Computing' is a method of optimizing, searching and learning by analogy with the evolution of creatures in nature. Creatures on Earth have taken billions of years to gradually evolve; applying that evolutionary methodology to engineering is the fundamental point of 'Evolutionary Computing'. The calculation methodology of 'Evolutionary Computing' follows the steps below.

First of all, once a problem is identified, possible solutions are represented as virtual creatures (or sets of genetic factors); the manner of expressing this has several variations. Then, groups of individuals are constructed by generating various selections of possible solutions (virtual creatures based on various sets of genetic factors).

Next, an evaluation function assesses each individual in the group, measuring how well the individual addresses the identified problem. Based on these assessments, the fitter individuals are kept in the group and duplicated, while the less fit individuals are deleted. A genetic crossover or mutation methodology, similar to gene recombination, is then applied to individuals in the groups. This leads to a new repetition of the steps of evaluation, selection and gene recombination. This is the basic flow of 'Evolutionary Computing'.

Evolutionary computing includes several variations:

- Genetic Algorithm; GA

- Genetic Programming; GP

- Evolution Strategy; ES

- Evolutionary Programming; EP

- Classifier System; CS

Although these are all based on the evolution of living creatures, they differ in various ways in their modeling methods and processing procedures.
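The basic flow described above can be sketched as a minimal GA on the toy 'one-max' problem (the encoding, rates and population size are all hypothetical): individuals are bit strings, fitness is the number of ones, and evaluation, selection, one-point crossover and mutation are repeated each generation.

```python
import random

# Minimal GA sketch: evaluate, select, recombine, mutate, repeat.
def ga(bits=20, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    fitness = sum                               # one-max: count the ones
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:pop_size // 2]               # selection: keep the fitter half
        children = []
        while len(pop) + len(children) < pop_size:
            a, b = rng.sample(pop, 2)           # two parents
            cut = rng.randrange(1, bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # mutation: flip one random bit
                child[rng.randrange(bits)] ^= 1
            children.append(child)
        pop += children
    return max(pop, key=fitness)

best = ga()
```

Because the fitter half is carried over unchanged each generation (elitism), the best fitness found never decreases, and the population converges toward the all-ones string.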

2-4. From Biology and Biomimetics (Cooperative species)

2-4-1. The Sociable Weaver (Social Birds)

Fig 2-4-1,1 'Social weaver nest mass' 

Referencing a picture from "Life-history evolution and cooperative breeding in the sociable weaver" by Rita Covas Monteiro, page 4. “The photo shows part of colony 2, the largest colony in the study area that during the study period held 150-200 birds. The open savannah habitat characteristic of the area can be seen in the background.” (Covas, R. 2002)

The sociable weaver is a bird species living in the arid regions of southern Africa around northern Namibia, with a strong relation to the Kalahari vegetation of the area. It is their nests, rather than the birds themselves, which are conspicuous.

Their life pattern is unique and has frequently been a target of research. They typically live in the same nest as a group, while most other bird species live independently in separate nests. Interestingly, several different subspecies of birds use the same nest simultaneously, and other species use it for roosting. Some larger birds, such as giant eagles, use the nest as a platform on which to build their own nests. (Mendelsohn, J.M. 1997)

The sociable weaver tends to build larger nests than other species in order to live collectively. In a single huge nest made of stiff grasses, between two and five hundred sociable weavers live together. Their nests are built in tall trees, and some can be seen on the upper parts of power poles. Making the nest larger is excellent for insulation, keeping in warmth and controlling the interior conditions. As a nest is used for a long time, the weavers repeatedly repair and develop it.

The unique point of this biome is its combination of materials and creatures, much like a city. Each bird’s part of the nest is more than just a nest: it is an extension of their society, and not only their own society but also that of other species.

2-4-2. The Termites (Insects' Architecture)

Fig 2-4-2,1 Termite Mounds in Namibia

Referencing a picture from

There are also some insects which lead a communal life, such as termites, ants and bees. (Termites belong to the order Blattodea; ants and bees are Hymenoptera.) Termites are characteristic subjects in biomimetics, especially highlighted for their nests, called 'mounds'. Termites inhabit most land areas, from the tropics to the subarctic zone, but most prefer to live in the tropics. Recent research shows that this animal architecture is highly designed.

Contrary to the common view that these insects have low intelligence, the mounds do not simply result from the repetition of local patterns, but present a coherent global organization (Turner 2000). Moreover, the internal spatial network is an excellent compromise between efficient internal connectivity and defence against attacking predators (Perna, A. et al. 2008). More generally, the nest architecture contributes to maintaining the homeostasis of the local environment (Doan, 2007) (explained below).

Fig 2-4-2, 2 Section of Termites' Mound

The tunnel splits into branches near ground level, reaching each chamber, and plays the role of the colony's ventilation system. There are also termite species which open vents just above the ground level of the mound.

Regarding homeostasis: most termites (the example here is from Zimbabwe (Doan, 2007)) farm a 'fungus' as their primary food source. In Zimbabwe, the outside temperature ranges from 35 to 104 degrees F during the day, but this fungus farm needs to be kept at exactly 87 degrees F. The termites and their mound have a remarkable system for achieving this.

They make vents at the ends of the tunnels to control the interior temperature and humidity: air is sucked in through the lower entrance of the tunnel and passes through to the top. To keep this ventilation system working, the termites continually dig new vents and plug up old ones.

Fig 2-4-2,3 Biomimetic Architecture, ‘The Eastgate Centre’

This concrete structure, ‘The Eastgate Centre’, has a ventilation system mimicking that of a termite mound. Fans on the first floor continuously draw clean, cool outside air in from the bottom of the building. It is then pushed up vertical supply ducts located in the central spine of each of the two buildings.

The air is then vented under the floors and into the rooms. The fresh air replaces stale air, which rises and exits through exhaust ports in the ceilings of each floor, enters the exhaust section of the vertical ducts, and is finally flushed out of the building through the central chimneys of each volume. The whole complex consists of two volumes with a void in the middle; both work in the same way, like nested boxes.

In 1996, this excellent ventilation system was applied to a real building: the Eastgate Centre in central Harare, Zimbabwe, by architect Mick Pearce in collaboration with Arup (Doan, 2007). Inspired by Zimbabwean masonry and the self-cooling mounds of African termites, the building has no conventional air-conditioning or heating system, yet it stays regulated year-round, with dramatically lower energy consumption, through design methods alone. The building uses less than 10% of the energy of a conventional building of its size. These efficiencies also translate into cost: thanks to the improved air-conditioning approach, the owners are said to have saved $3.5 million, and the tenants’ rents are 20% lower than in surrounding buildings. The effect is thus not only financial efficiency but also a benefit to the environment.

2-4-3. Dictyostelium (Social Amoeba)

Dictyostelium discoideum, one of the cellular slime molds, is known as a 'social amoeba' in various senses. Slime molds divide into two groups, the 'myxomycetes' and the 'cellular slime molds', which look similar but are different. These creatures come in various colours, such as yellow, white and blue. They exist close to our daily environment, for instance in forests, but are easily missed because they are small. Recent work (Brock, A. 2011) has shown that microorganisms are surprisingly like animals in having sophisticated behaviours such as cooperation, communication and recognition, as well as many kinds of symbiosis. This creature is unique in several respects.

Fig2-4-3,1 : Dictyostelium Aggregation by Bruno in Columbus 2008

D. discoideum exhibiting chemotaxis through aggregation

The first point is that they behave sometimes like plants and sometimes like animals, but also have fungus-like characteristics. For example, in some phases of life they make fruiting bodies ('basidiocarps'). On the other hand, John Tyler Bonner (Bonner, J.T. 1965) found that Dictyostelium discoideum walks and changes itself according to environmental conditions such as light, which is an animal characteristic. This ambiguous natural history has long been argued over, but recent studies show that slime molds are their own group of species: the order Dictyosteliida (dictyostelid cellular slime molds, or social amoebae).

The second point is that the Dictyosteliida contain organisms hovering on the borderline between unicellularity and multicellularity. Their sociability, adjusting to conditions by dividing into independent cells, is well known among researchers. When there is food (bacteria or yeast) in the surroundings, they live as amoebae, eating and dividing; this behaviour is like that of a unicellular animal. On the other hand, once they have exhausted the food, on the order of a million individuals start gathering and form a mass. Research shows that this mass is more than a mere aggregate: it has the features of a multicellular animal, and it moves to find a better environment.

Fig.2-4-3,2: Different life phase of Dictyosteliida

Dictyostelium discoideum's life cycle. 1: an elliptical spore. 2: an amoeba germinates from the spore and propagates while eating. 3: amoeba cells gather when they have consumed the food. 4: the slug-like mass crawls around. 5: a fruiting body forms when an appropriate place is found. (Referring to Kawada, T. 2006)

The third point is that they practise primitive farming, called husbandry. The social amoeba Dictyostelium discoideum has a primitive farming symbiosis that includes dispersal and prudent harvesting of the crop (Brock, A. 2011). About one-third of wild-collected clones engage in husbandry of bacteria: they do not consume all the bacteria, but incorporate some into their fruiting bodies. They then carry the bacteria during spore dispersal and can seed a new food crop. Where edible bacteria are lacking at the new site, this is an advantage.

The striking convergent evolution between bacterial husbandry in social amoebas and fungus farming in social insects makes sense because, according to A. Brock, the multigenerational benefits of farming go to already established kin groups.

2-5. From Psychology

2-5-1. Valentino Braitenberg and His Suggestions for a Temporal Design Method

One of the pioneers bridging neuroscience and physical computation, the Italian neuroscientist Valentino Braitenberg, developed the novel field of ‘embedded cognitive science’. His academic approach is radically reductionist, assuming that every neural action can be translated into a physical computation, similar to the ‘wiring’ of components. In his book ‘Vehicles’ (Braitenberg, 1984), to describe the process concretely, he made 14 system models (V1-V14), from simple to complex, using toy vehicles, and from them he tried to comprehend neural actions and intelligence through a number of experiments. Of course, a real brain has far more complex connections between nerve cells, and his study models are no more than a mathematical hypothesis. Such an approach (Braitenberg's approach) of ‘hypothetical modelling, experiment, observation, model modification’ is called ‘abduction’ in the complex systems science field, as it belongs neither to deductive nor to inductive reasoning.

Fig 2-5-1,1: Sensory-Motor Coupling (Drawn by K.Hotta)

A diagram shows Sensory-Motor Coupling (SMC), where the sensor's input enters the object, which then sends the signal for movement out from its right side. In this experiment the SMC is connected to sensors and motors. Here, ‘signal for movement’ means specifically ‘go straight’ or ‘turn right’. For example, suppose a robotic vehicle proceeds along a corridor and is now able to see round a corner. If it turns right, it will reach a big square; if it turns left, the corridor continues straight on. When this robot vehicle reaches the corner, it will perceive some discontinuity in the sensor value (for the moment let us say the input is a visual device such as a web-cam).

When this happens the sensor unit will send a specific signal pattern, which encodes the immediate environment from the corridor towards the square, and that signal will be used to choose a different movement pattern. This decision function is provided by the SMC. In other words, the SMC's design must provide this connection between sensor information (input) and movement (output) in the system. These two elements are the minimum requirements for an SMC system.
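The input-to-output mapping described above can be illustrated with a minimal sketch. This is not Braitenberg's own formulation but a hypothetical SMC for the corridor example: a discontinuity between two brightness readings selects a turn, otherwise the vehicle continues straight. All names and thresholds are illustrative assumptions.

```python
# A minimal, hypothetical sketch of sensory-motor coupling (SMC):
# sensor input (two brightness readings, e.g. from a web-cam image
# split into left and right halves) is mapped to a movement command.

def smc(left_brightness: float, right_brightness: float) -> str:
    """Choose a movement from a pair of sensor readings.

    A large left/right discontinuity (e.g. seeing round a corner
    into a bright square) triggers a turn; otherwise go straight.
    """
    difference = right_brightness - left_brightness
    if difference > 0.3:        # square opens up on the right
        return "turn right"
    elif difference < -0.3:     # opening on the left instead
        return "turn left"
    return "go straight"

print(smc(0.2, 0.8))  # discontinuity on the right -> "turn right"
print(smc(0.5, 0.5))  # uniform corridor -> "go straight"
```

The point of the sketch is that the whole ‘decision function’ is nothing but this wiring between input and output, which is exactly the reductionist claim discussed next.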

Braitenberg's motivation for these experiments is quite radical reductionism: the phenomenon, which looks ostensibly complex, is no more than a combination of electrical circuits. His philosophy is closely related to the idea of artificial life. In this interpretation, intelligence is a simple machine rather than a mysterious one.

In his book, T. Ikegami repeatedly mentions (Ikegami, 2007) the importance of V7 and V11. He insists that the E-line and the M-line could be candidates for a ‘middle layer in cognition’. It is interesting that spatial correlation and temporal (time) correlation are both important for cognitive intelligence. It also carries the deep suggestion that ‘cause and effect’ are not clearly and objectively identifiable there, but depend on the vehicle's subjective sensibility (this may have a connection with the Chinese room argument about the Turing machine). From this argument, the SMC itself could be said to be an artificial architecture which is able to cut and connect spatial correlation.

Fig 2-5-1,2: One example of these experiments, V6 (Drawn by K.Hotta)

Surprisingly, the V6 vehicles use an evolutionary process to become smarter. Because of its circuit design, this vehicle can gain a certain sort of intelligent behaviour over generations.

In fact, several researchers have done work similar to V6 (fig 2-5-2). For example, Floreano and his colleagues (Nolfi, S and Floreano, D, 2000) designed a vehicle which avoids colliding with walls in a maze, and then extended it to 3-dimensional space with a small airship. Nolfi too (Tani and Nolfi, 1999) designed a vehicle which can find an exit point or determine the size of an object in Euclidean space.

Though most of these machines have no memory components, they can, surprisingly, solve intelligence-like tasks. Lipson and colleagues (Lipson and Pollack, 2000) developed a new type of vehicle using rapid prototyping. In this case, the creature-like machine evolves an appropriate shape and system by using a GA and a neural network. Rather than a vehicle, it should be called an artificial creature, because structures other than wheels are used. (This project was done in 2000; Dr. H. Lipson further developed the model later on.)
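The evolutionary approach used in this line of work can be sketched in miniature. The following is not the cited authors' code but a toy genetic algorithm under assumed conditions: a population of sensor-to-motor weight pairs is selected for a stand-in fitness (turn away when an obstacle sensor reads high), with mutated copies of the survivors refilling each generation.

```python
import random

# Toy GA sketch (illustrative only): each genome is a pair of weights
# [w0, w1] for a linear controller  output = w0 * sensor + w1.
# The fitness rewards controllers that output -1 when the obstacle
# sensor reads 1.0, and 0 when it reads 0.0 (i.e. w0 -> -1, w1 -> 0).

def fitness(weights):
    score = 0.0
    for reading, target in [(1.0, -1.0), (0.0, 0.0), (0.5, -0.5)]:
        output = weights[0] * reading + weights[1]
        score -= (output - target) ** 2   # negative squared error
    return score

def evolve(generations=100, pop_size=20):
    population = [[random.uniform(-1, 1), random.uniform(-1, 1)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [
            [w + random.gauss(0, 0.1) for w in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

best = evolve()
print(best)  # evolved weights should approach [-1, 0]
```

No explicit memory or problem-solving procedure is coded anywhere; the ‘intelligent-like’ mapping emerges from selection alone, which is the point made above.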

However, there is an unsolved issue: how to develop V14's randomness. Vehicles V1 to V13 were designed to be ‘smart’, meaning they had the ability to solve a problem. V14 was about smartness but also about the random fluctuation of life. Most traditional Artificial Intelligence (AI) research focuses on problem-solving with a specific purpose, or on accomplishing certain tasks such as beating a person at chess. In a general sense, life-like behaviour is not functional: sometimes it has no clear purpose and may just be play or capriciousness. That is why V14 was added. But it is difficult to develop these fluctuating functions using problem-solving methods or GA optimisation; no one yet knows how to do it.

T. Ikegami also points out in his book “Motion Makes Life” (p. 74) (Ikegami, 2007) that in Braitenberg's experiments the “occurrence of time” is dealt with from the point of view of motion. For motion to occur, time needs to be considered, because motion is a dynamic phenomenon. In almost all the chapters of Vehicles, the model processes its behaviour through time and Euclidean space, creating a system for behaviour. For this (Programmable Architecture) thesis, there are hints and implications here for a temporal design method (that is, designing with the time process as a dynamic system, not as a static image).

2-6. From Art

2-6-1. Strandbeest by Theo Jansen

Fig 2-6-1,1: Strandbeest by Theo Jansen (from his website)

Theo Jansen has been occupied with the making of a new nature since the late 1970s. Plastic yellow tubes are used as the basic material of this new nature. He makes skeletons which are able to walk on the wind. Eventually he wants to put these animals out in herds on the beaches, so that they will live their own lives. This artwork includes mechanisms such as the Klann linkage, Jansen's linkage, etc.

‘Strandbeest’ is a kinetic sculpture moved by wind power, made by the Dutch physicist and artist Theo Jansen in 1990. Jansen has been studying the creation of new life in the field of art. The name ‘Strandbeest’ means ‘beach creature’ in Dutch. The intricate structure is made by assembling pipes, wood, and wing-like sails. He says on his website that the first creatures were just a rudimentary breed, but they gradually evolved into a generation of machines able to react to the surrounding environment.

Usually Strandbeest is shown on the sand of a beach. Its feet are designed to walk on sand, rather than using something such as a wheel. Eventually he expects to put herds of these machines out on the beaches, in order to, in his words, ‘live their own lives’.

As background to this work, Jansen was born in the Netherlands in 1948. In the early part of his career he studied physics at the Delft University of Technology, where he put forward some projects combining physics and art. From 1979 onwards Theo Jansen presented a number of projects before Strandbeest; his aims were to compose each subject of a single ingredient and to create new life. In 1979 he launched ‘The UFO’, made of polyvinyl chloride; it was 4 metres wide and could fly. From then on he used polyvinyl chloride for his objects, something he has continued for all of his work into the present. In 1986 Theo Jansen presented ‘Bird’, in which a human hung from a ceiling flies around a circular room with plastic wings.

There is a unique philosophy in Strandbeest. As human beings are made up of proteins, Theo Jansen decided that his new life creatures should also be composed of one ingredient, and he adopted polyvinyl chloride (PVC). Although he initially chose PVC because of its price, it turned out that PVC had the ability to change form; thus the PVC tubes could become the "cells" of Strandbeest, as he planned. Theo calls the plastic tubes "cells". He heats them up to create the various kinds of shapes needed for muscles, legs and lungs, for example. Another key point of this work is its analogue environmental detection. Jansen demonstrates the machine on the beach. Strandbeest is designed to detect its surroundings; for example, a sensor makes it aware that it has entered the sea. If the machine detects water, it moves backward to return to the beach. One model will even anchor itself to the earth if it senses a storm approaching.
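Although Strandbeest's sensing and actuation are purely mechanical and pneumatic, the detect-and-retreat logic just described reduces to a simple rule, which can be sketched as follows. The function and sensor names are hypothetical, introduced only to make the behaviour explicit.

```python
# Hypothetical sketch of Strandbeest's environmental responses as a
# priority rule: storm detection outranks water detection, which in
# turn outranks ordinary wind-driven walking. Names are illustrative.

def strandbeest_step(water_detected: bool, storm_detected: bool) -> str:
    if storm_detected:
        return "anchor to ground"   # one model pins itself to the earth
    if water_detected:
        return "walk backward"      # retreat from the sea to the beach
    return "walk with the wind"

print(strandbeest_step(water_detected=True, storm_detected=False))
```

The interesting design choice is that this ‘program’ is implemented without any electronics, in the arrangement of the PVC tubes themselves, which is what makes the detection analogue.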

2-6-2. Petit Mal by Simon Penny

Fig.2-6-2,1: Petit Mal by Simon Penny (from his website)

The physical structure of Petit Mal is centred around a set of pendulums and is built of a welded aluminium frame and a pair of wheels, because these materials are lightweight and economical. It is driven by just two motors set on its body, and it is self-stabilising. The control facilities, such as the processor, sensors and logic power supply, are housed in the upper pendulum; the lower pendulum contains the motors and the motor power supply. The inner pendulum keeps the sensors perpendicular while the aluminium body is moving. Batteries set in both frames keep the balance with their weight and act as power sources.

Three ultrasonic sensors, each paired with a piezoelectric sensor, are attached to the front of Petit Mal, while another ultrasonic sensor is set on the back. Two motors with optical encoders for motor feedback and an accelerometer are installed. The main coordinator of the system is a single Motorola 68HC11 microprocessor; another processor was planned to allow Petit Mal to learn. Petit Mal can function for a few hours before the batteries need to be replaced.

With ‘Petit Mal’ Simon Penny aimed to produce a robot which was truly autonomous. It was first designed in 1989 and production started in 1993. Simon Penny is an Australian artist working in the field of interactive media art, as well as in digital cultural practices and interactive art. The autonomy of Petit Mal opens a new domain of autonomous interactive aesthetics. As the budget was limited, the production required miniaturisation and efficiency in its conception. The project was assisted by Mark Needelman, Kurt Jurgen Schafer, Gabe Brisson and Jamieson Schulte.

‘Petit Mal’ was an attempt to investigate the aesthetics of a machine and its interactive behaviour in an actual setting. This work was unique in a number of ways. The first key aspect was its use of space. In that era, most digital artwork was screen-based, using a ‘Graphical User Interface’. ‘Petit Mal’, however, recognised space, chasing and responding to people. It behaved physically, something unusual for an electronic object. In his words, “I am particularly interested in interaction which takes place in the space of the body, in which kinesthetic intelligences, rather than ‘literary-imagistic’ intelligences play a major part.” (from his website)

The second key aspect was its degree of intelligence. Unlike industrial robots, which are recognised as tools, Petit Mal does not work on tasks optimised and designed beforehand; it reacts based on its investigation of its environment. Penny cited the Brooksian subsumption architecture, which it resembles. The basic software of ‘Petit Mal’ was composed of technologies which determined its shape and dynamics. He insisted that the evaluation of interactivity was subjective, as in the Turing Test.
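The Brooksian subsumption architecture mentioned above stacks simple behavioural layers, with higher-priority layers suppressing (subsuming) lower ones when they fire. The following sketch is a generic illustration of that idea, not Penny's actual control code; the layer names and sensor fields are assumptions chosen to echo Petit Mal's behaviour.

```python
# Generic sketch of a subsumption architecture: each layer either
# proposes an action or stays silent (returns None); the first
# non-silent layer, in priority order, wins. Names are illustrative.

def avoid(sensors):
    if sensors.get("obstacle_close"):
        return "back away"
    return None  # layer stays silent

def follow(sensors):
    if sensors.get("person_detected"):
        return "approach person"
    return None

def wander(sensors):
    return "wander"  # lowest layer always proposes something

LAYERS = [avoid, follow, wander]  # highest priority first

def subsumption(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(subsumption({"person_detected": True}))
print(subsumption({"person_detected": True, "obstacle_close": True}))
```

No central planner or world model exists here; competent behaviour arises from the layering itself, which is why such architectures are often contrasted with classical task-optimised robotics.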

The last key aspect involved an understanding of integrated cybernetics. Penny insisted that “Petit Mal is in some sense an anti-robot.” Most robots are elaborations of Von Neumann's notion of the universal machine, in which the physical machine is simply a formless form to be filled with software "content". Penny described “Petit Mal as an attempt to build a robot which opposed that concept. Hardware and software were considered as a seamless continuity, its behaviour arising from the dynamics of its ‘body’.” This is closer to the modern understanding of cybernetics.

2-7. Conclusion and Problem Statement

This thesis draws on various fields of research, including not only Architecture but also Engineering, Cybernetics and Computation, as well as Biology and Art. This has become necessary as the system has shifted to one governed by computers and electronics. Each field's approach is interesting, but tends to break down into the specific techniques and methods relevant to its focus. The state of the art across a number of fields needs to be considered in order to avoid biased and limiting methodologies.

Current Limitations of Temporal Design Models

1. Emergent conceptual planning should avoid the designer’s teleological tendencies. The designer should not propose his or her subject, but rather keep the architecture (both building and system) adaptive for future changes and iterations.

2. Existing time-based design models fall short in terms of both hardware and software strategies, for example lacking methods of evaluation and modification, or lacking a connection between moving parts and structural parts. Moreover, at present their adaptability comes only through software, meaning there is no true physical adaptability once the structure is built.