Step-by-Step: The Algorithmization of Creativity under Francoist Developmentalism

    Diana Cristobal Olave


Abstract

This article examines how computer technologies shaped architecture research practices in Spain in the late 1960s and early 1970s—a period when the Francoist regime sought to open its economy to Western democracies while also retaining ideological control of its cultural production, including architecture. Surveying the Calculation Center of the University of Madrid (CCUM), I show how computer techniques put pressure on architects to legitimate their work through algorithms—that is, through definite sequences of iterative steps. I trace how their desire to formalize algorithmic processes into inscriptions culminated in the invention of a kind of architectural drawing that I term algorithmic drawing. These inscriptions depicted discrete units and interconnected links. Through an act of subdivision and stepwise concatenation, architects drew mental processes, human behavior, and building plans as step-by-step chains. The result of this logic was that these inscrutable drawings paradoxically fostered notions of visibility and transparency. Aiming to make design processes ‘transparent’, ‘visible’, and ‘exposed’, the architects at the CCUM envisioned a new mode of architectural drawing: more abstract in appearance than orthographic projection; concerned with non-metrical issues; capable of mobilizing and translating heterogeneous information; and suggestive of a replicable and exhaustive epistemology.

Keywords: algorithm, drawing, methodology, externalization, creativity

How to Cite
Cristobal Olave, D. (2020). Step-by-Step: The Algorithmization of Creativity under Francoist Developmentalism. Contour Journal, (5). https://doi.org/10.6666/contour.vi5.100
Submitted: Jul 26, 2019
Published: Mar 13, 2020

Introduction

In 1966, an IBM 7090 computer was decommissioned from the European Organization for Nuclear Research (CERN) in Geneva, Switzerland [1]. Regarded as obsolete because it could no longer perform its original task—high-energy physics research—the computer was transferred to the heart of the University of Madrid. IBM donated the computer along with annual research grants on the condition that the Spanish Ministry of Education fund the construction of a building that would house the machine and auxiliary spaces for research [2]. This became the Calculation Center at the University of Madrid (CCUM)—the first building constructed in Spain with the sole purpose of housing a computer, and the first national institution dedicated solely to applying computer techniques to educational and research purposes.

The CCUM brought together technicians with scholars who had no previous training in computer science, and linked computer techniques with disciplines as diverse as art, architecture, linguistics, and music. This heterogeneous group of people communicated with each other through a very specific mode of writing. Flowcharts, graphs, and networked diagrams populated the pages of their personal manuscripts and notebooks, and of the periodicals published by the Center. All these various forms of inscription had one thing in common: They depicted discrete units and interconnected links. One could follow the links along their path, as if they were step-by-step instructions. Yet one could also see them juxtaposed with each other, rendering evident their representational differences. When produced by architects, this consistent—but plastic—graphic vocabulary was used indiscriminately to depict social and behavioral relationships between humans and spatial relationships among building parts, buildings, and cities. Yet the goal was always the same: to divide a complex problem into a definite sequence of iterative steps—that is, to frame architecture as a problem-solving activity that could be depicted through algorithms.

Algorithmic techniques shifted the goal of architectural design from controlling an outcome to the design of a (potentially) automatic decision-making process. Under Francoist Spain, this displacement of authorship away from personal agency and onto externalized algorithmic processes was framed by architects as a means to ‘liberate’ architectural design from authoritarian norms and constraints. Opposed to “private, incommunicable, and even esoteric” [3] personal design processes, algorithmic techniques fostered ethical implications through demands of transparency. Aiming to make design processes ‘transparent’, ‘visible’, and ‘exposed’, architectural researchers at the CCUM formalized algorithmic procedures into new forms of inscription that conveyed a consistent aesthetic vision. I call this new kind of inscription practice algorithmic drawing, and argue that it provided an apparatus to blend differences between objects and subjects through a flat two-dimensional step-by-step representation. Through an act of subdivision and stepwise concatenation, architects drew mental processes, human behavior and building plans as a series of sequential vertices and interconnected links—thus claiming correspondence between them.

The Algorithmization of Creativity

During the Center’s opening ceremony, the dean of the University of Madrid announced that the calculation services of the donated equipment were open to “non-routine tasks, to all university centers, superior technical schools and other teaching and research agencies,” [4] and noted that “no routine or commercial work [would] be accepted” [5]. Not only was this proposed use unique in Spain—where computers were only beginning to be acquired for banking and industry [6]—but it also opened the way to an unconventional and unprecedented interdisciplinary setting that brought together the sciences, arts, and humanities [7].

Whether in the courses on architecture, art, linguistics, or music, the objective was essentially the same: the “algorithmization of creativity,” [8] to use the expression coined by the center’s subdirector, Ernesto Garcia Camarero. According to Camarero—a mathematician, computer scientist, and professor at the Madrid Technical School of Architecture—the work of the CCUM courses entailed resolving the difference between “two antagonistic words: algorithmia [algoritmia] and creativity [creatividad]” [9]:

“The first word [algorithmia] expresses the possibility of reducing the processes and solving the problems in a finite set of well-defined and simple rules. These rules would be such that after their prolix, orderly and mechanical application, the results are obtained from some data… When we speak of ‘creativity’, we refer to a human activity that is not well defined and with which results are obtained that are unknown in advance. Creativity always comes up surrounded by a mysterious halo, which encompasses intuition, the happy idea, the genius...” [10].

Camarero described the previous historical attempts to study the “imprecise border that separates algorithmic processes from those that are not,” [11] and to “approach the problem of creativity in a systematic way” [12] as ‘heuristic’ approaches—that is, as sets of vague rules (such as trial and error) that could help expedite the solution of a problem but that couldn’t be applied automatically. Conversely, he saw in the computer and associated mathematical techniques the solution to such irreconcilable differences. He remarked that ‘the algorithmization of creativity’ would require an intellectual shift similar to the one that had taken place in twentieth-century structural linguistics: the identification of mathematical structures that could represent separate elements set in relation. By associating attributes (i.e., letters, words, names, members of a social group, etc.) with the nodes of these structures, the participants of the CCUM claimed that specialists from widely distinct fields could find a ‘common language’ and work side by side.

The term algorithm had, for the CCUM, simultaneous and varied connotations depending on the community—particularly those inside and outside of the technical professions, who shared common words but used them to different ends. For the mathematician Garcia Camarero an algorithm was defined as “a set of rules that when applied to some data give the results of the solution to a problem.” Such rules had to be “simple, and need(ed) to be applied only a finite number of times” [13]. This definition had its origins in the early 1960s, when American institutions such as the Association for Computing Machinery used the work of prominent mathematicians—Alan Turing, Stephen Cole Kleene, and Andrey Markov, among others—to define the theoretical boundaries of the newly formed discipline of computer science [14]. However, when artists and architects at the CCUM invoked the algorithmic they were not concerned with the mathematical formulation per se (precisely understood), but with the insertion of procedure into art and architectural design (broadly understood). What made something algorithmic was its commitment to procedure, and to the “if/then” logic of computation.
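Camarero’s definition—a finite set of simple, well-defined rules that, applied mechanically to some data, yield a result—can be made concrete with the textbook example used throughout the mathematical literature he drew on: Euclid’s greatest-common-divisor procedure. This sketch is illustrative only; it is not an example taken from the CCUM materials.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite set of simple, well-defined rules
    that, applied mechanically to the data (a, b), produce a result."""
    while b != 0:          # each pass is one discrete, repeatable step
        a, b = b, a % b    # the same rule, applied a finite number of times
    return a

print(gcd(1071, 462))  # 21
```

Every run terminates after finitely many steps, and the same data always yield the same result—exactly the properties that, in Camarero’s terms, separate ‘algorithmia’ from the “mysterious halo” of creativity.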

In the courses organized by the CCUM, algorithms were discussed as step-by-step procedures that could be enacted by someone (human) or something (computer) repetitively in different contexts. It was thus not simply an oppositional debate between machinic and human techniques, but the imposition of proceduralization, recursivity, and automation into architecture and art. Algorithms could be developed by hand or could be computed in the IBM machine. What mattered was to reframe these disciplines as problem-solving activities that could be depicted through a stepwise execution of instructions—a design sequence. This methodological trend re-described art and architecture as an iterative decision-making ‘process’ that was catalyzed by its engagement with mathematics.

Interest in the algorithmization of creative processes went hand-in-hand with the introduction of new notational systems. Flowcharts were rapidly embraced by artists and scientists because they provided a comprehensive visual depiction of the sequence of instructions to be performed by the computer. But other notational systems such as mathematical graphs and block and networked diagrams also proliferated, revealing slightly different aspects of the engagement of design practices with mathematics. What all these forms of inscription offered was a means to visualize process. The interdisciplinary researchers of the CCUM communicated with each other by drawing and editing each other’s graphic notations. As researchers became fascinated by the consequences of following these diagrams along their paths, conventions and assumptions underlying circuits and circuitry spread across different disciplines. At the CCUM, architectural design processes, buildings, and people alike led to step-by-step reflexive drawings.

Fig 1. “Algorithmic drawings” published in J. Segui and M.V.G. Guitian, Experiencias en Diseño: Ensayo de Modelo Procesativo (Madrid, 1972). Courtesy of Javier Segui de la Riva.

These notational systems, which I call algorithmic drawings, acted simultaneously as a medium and as an object of discourse—diagrams of algorithmic processes from which emerged a theory of algorithmic processes. As techniques, they entered architecture as part of an antagonism towards final determinism—recasting objects as the result of a sequence of steps and decisions, probabilistic scenarios, and combinatorial arrangements. As images, they were showcased together with the art works in the exhibitions produced by the center, and were published together with diagrams of electrical and computer circuits. Yet, what kind of ‘aesthetic’ concern can characterize these networked forms of inscription? I would like to propose that what guided this aesthetic was less a concern with efficiency than a faith in the power of visualization. Such skeletal drawings oscillated between the visual and the mathematical. They offered a structure by which to gather, compare, and superimpose things that were dissimilar, and used the ‘readability’ and ‘transparency’ of such structures as alibis for democratizing design competence.

Externalization

The association between algorithms and notions of visibility can be seen in the exercises that the architects and professors Javier Segui de la Riva and Maria Victoria Guitian developed together with their students at the Technical School of Architecture during the years 1971 and 1972, as part of their CCUM activities. The goal of these exercises was to empirically study the process that students followed when designing—observing their steps and transcribing their decision-making processes into a series of drawings. Segui and Guitian termed this act of transcription ‘externalization’. Referring to the literature of the Design Methods movement and quoting its founder, J. Christopher Jones, Segui spoke of the need to:

“…make public the hitherto private thinking of designers; to externalize the design process. In some cases this is done in words, sometimes in mathematical symbols, and almost always with a diagram representing parts of the design problem and the relationships between them” [15].

Jones had advanced this idea during the 1967 Design Methods Symposium in Portsmouth, UK. According to Jones, in the face of the increasing complexity of design problems, externalization was the only possible way to exercise critique and avoid ‘expensive mistakes’. Jones claimed that this process was in fact the common aim of the Design Methods proponents—otherwise characterized by a great variety of techniques. He referred to the problem of externalization as a “business of language construction,” and argued that devising this new language would serve to bridge “the gap between applied art and applied science,” by combining artistic modes of thought with “scientific doubt and rational explanations” [16].

Similar claims were made across all CCUM courses. Garcia Camarero, for instance, spoke of the need for “bringing forth” [hacer emerger] the “unconscious rules of artistic creation to more conscious levels” in order to “allow a better management of this methodology,” [17] and even compared this process to that of psychoanalysis. Within the architecture group, Segui and Guitian devised a series of exercises intended to externalize the otherwise “private, incommunicable, and even esoteric” [18] architectural design practices. These exercises varied throughout the years: During the first semester, architecture students were asked to “draw their ideal house in full freedom, in a short period of time and trying to describe their process” [19]. The second iteration included a variation where students also needed to describe the “order in which the final drawing had been achieved” [20]. The third and fourth iterations replaced the house with an existing building, and then the students’ own projects with those of their peers. Later, these exercises morphed into others that relied less on conscious description and more on observation:

“To analyze what the architects did when designing, we decided to set up a table with a glass plane, place an automatic camera underneath it (the camera had a rope to be able to shoot it from a distance) and we made designers with diverse experience draw in this device. While they drew (and spoke) we were taking pictures of the process with a certain cadence (every minute, for example) and then, once these photographs were revealed in transparent paper, we compared them” [21].

Throughout these uncanny exercises Segui and Guitian combined the literature on ‘design methodologies’ with their interest in ‘drawing’. Trained not only in architecture—through an eminently technical pedagogical model—but also in psychology and art, Segui and Guitian positioned themselves strategically as figures who could mediate between humanistic and technocratic concerns [22]. They embarked on the task of transcribing the sequence of steps described by the students, and aimed to “develop an algorithm that ... could automatically generate an analogous or identical design to another elaborated by any designer” [23]—that is, a well-defined sequence of rules that could describe the process by which the participants arrived at their housing solutions. This algorithm would eventually be programmed into the computer, which, in turn, would automatically ‘generate’ housing designs according to these rules.

Segui and Guitian described these drawings as “symbolic formalizations… where parts, relations, and global behavior can be distinguished” [24]. They began by subdividing the continuous process of the participants into a series of discrete steps (such as ‘definition of units’, ‘hierarchy’, ‘economical restrictions’, etc.). Once these steps were agreed upon, drawings of step-by-step sequences multiplied. Every single node in the scribbled drawings was meant to correspond to the execution of a particular ‘mental step’. One could follow them along a sequence—step by step—or get caught in a loop with no way out. Segui and Guitian, fascinated by the consequences of following the circuit along its path, declared that two types of design processes should be distinguished: “those described in a linear manner, and those described in a circular manner or through successive approximations” [25]. Segui and Guitian saw in these algorithmic drawings the capacity to externalize dynamic processes—that is, processes that could ‘readjust’, ‘correct’, and ‘regulate’ themselves. In other words, they believed these drawings could solve the impossibility of representing a process that changed itself.

Whether linear or cyclical, these diagrams of circuits do not make sense without a discrete notion of time. This notion stands at odds with existing architecture and design historical accounts, which have characterized the visual culture that surrounded computation during these decades as non-sequential. Reinhold Martin, for example, describes how the prominent design theorist Gyorgy Kepes called for a shift from ‘thing-seeing’ to ‘pattern-seeing’ [26] in his 1956 book The New Landscape in Art and Science, portraying an aesthetic in which everything was connected to everything else “in all directions, a pattern” [27]. Similarly, Orit Halpern describes the multi-screen projections of Ray and Charles Eames as “forcing the eye to move rhizomatically, making unexpected, and non-linear connections” [28]. This emphasis on multimedia forms of electronic display, reinforced by media theorists such as McLuhan—who declared a clear difference between the ‘linear’ culture of the book and the ‘non-linear’ culture of electronics [29]—has managed to conceal the otherwise highly normative, modular, ordered, and sequential nature of computer instructions and of algorithmic processes. To put it in the words of Segui, “the process will be organized… according to the order and priority established” [30].

The result of this logic was that these inscrutable diagrams paradoxically fostered notions of visibility and transparency. Despite the ambiguity and opacity of the drawings themselves, and the multiple difficulties that Segui and Guitian encountered along the way (students who refused to participate, others who couldn’t express or remember why they made certain decisions, etc.), these inscriptions mattered because of the visual knowledge they perpetuated. By externalizing the algorithmic circuits of the mind, these drawings functioned as self-evident propositions. The mind, like the computer, could only be understood if circuits were involved—here, computer engineers, architects, and psychologists agreed. Yet, for architects, this claim was also a political one, a form of epistemological emancipation, or enlightenment. These drawings were seen by the participants of this study as an instrument that could correct architecture’s methodological deficiencies, by ensuring that physical form was derived from ‘hard’ modes of expertise—algorithmic sequences—as opposed to the architects’ idiosyncratic preferences.

Making Explicit, Rendering Transparent

The conflation of computation with the idea of rendering information ‘transparent’, ‘visible’, and ‘exposed’ could also be seen in the building’s architecture and in the design of the IBM machines. Both Reinhold Martin’s and John Harwood’s research on the architecture of the corporation have extensively illustrated how IBM’s buildings emerged united by a common design logic. Martin’s insistence on the curtain wall has shown how this organizational device served as a carrier of the corporate image, and Harwood’s insistence on the monastic and prison-like corporate courtyards has shown how this typology was used to set IBM’s corporate interiors in stark contrast to their countryside sites [31]. However, the space of the corporation in the Spanish scenario found a different logic.

The Calculation Center was designed by the Spanish architect Miguel Fisac. By 1966, Fisac had already collaborated with the Superior Board of Scientific Investigations (CSIC) and the Ministry of Education to build some of the most important buildings dedicated to scientific research in Madrid. With the rise of technocratic governance in the 1950s, and the incorporation of a large number of Opus Dei members into government positions, the close relationship that Fisac maintained with the Catholic institution garnered him significant clients and important architecture commissions. It was his previous experience with scientific research buildings, along with the laboratories and factories that he designed during the early 1960s, that appealed to IBM and won him the commissions for the Calculation Center and the IBM Office Center in Madrid’s city center [32].

Fig 2. View of the computer room at the Calculation Center at the University of Madrid (CCUM), c. 1969. Courtesy of Florentino Briones Martinez.

Unlike their American and European counterparts, the Madrid Calculation Center and the IBM Office Center were eminently urban. Both buildings, located in strategic areas, featured a glazed plinth that displayed the computers at work to passersby. At night, a uniform landscape of neon illuminated the interiors and called attention to the affective qualities of the computer, most likely instigating in the public the desire to purchase corporate products. The building’s independent, self-organized, and seemingly ever-expanding interiors were visually exposed to the street, converting the machine into a catalyzer of a multiplicity of gazes. In both buildings the space of the computer was re-imagined as a public exhibition, as a mediated space that appeared seemingly direct and seductive while remaining physically inaccessible. The computer—a non-visual and non-transparent machine—paradoxically fostered ‘visual culture’ and ‘transparency’.

The IBM machine was displayed on the ground floor of the Madrid Calculation Center next to spaces for reception, programming, and administration. The first floor was used for research and office facilities, including a library and a conference room, and the underground floor was dedicated to storage and technical purposes. The machine was an IBM 7090, designed by the American architect and industrial designer Eliot Noyes [33]. Its modules were deployed in a white room with plastic floors and a gridded ceiling, and were arranged in a modular fashion, forming a semicircle around the space. The tubes and wires that constituted the CPU were broken down into several individual, freestanding volumes linked together by wiring hidden underneath a raised floor. The individuality and modular logic of the machine demonstrated the interactive form of an information circuit. The impression of lightness and mobility was emphasized through dark bases that receded behind the bright enameled metal skin of the cabinets, which appeared to float above the white floor and beneath the ceiling grid. The auxiliary machines were located in the annex rooms, physically connected to the main computer area but visually hidden from the street. At the center of the scattered modules stood the operator’s console, an input unit consisting of a desk with a typewriter keyboard and a series of switches from which the computer was controlled. From this privileged central position, the human operator could control the dispersed machine.

The machine was designed to reveal and conceal at the same time. The glass surfaces of the IBM 7090 exposed its magnetic disks, color-coded wires, plastic, and metal connectors to the operator directly, enabling a seemingly direct and theatrical view into the core mechanism of the machine. This theatrical view was enhanced by the serialization of the CPU’s modular units, which openly displayed their brightly colored interiors in a sequential manner. Unlike other layouts that Noyes designed for IBM America, in which the machine modules were randomly dispersed in ever-expanding grids, at the CCUM the cabinets were arranged into a semicircle with the operator’s console at its center. This unusual arrangement rendered visual the step-by-step nature of the process of computation. From the central console, the operator was able to see the magnetic disks rotate one at a time, and one after the other—and thus follow the sequence from the moment when the perforated cards were inserted into the card reader to the moment when the result of the computation was sent to the printer.

Both the CCUM rooms and drawings rendered visible the sequential process of computation—one step at a time. This act of decomposition and stepwise concatenation fostered ethical implications through the demands of transparency. The CCUM participants saw in the algorithm the potential to correct the ‘mysterious halo’ and ‘private intuition’ that they argued surrounded creative practices. Similarly, they saw in the computer the potential to resolve the differences between the systematic and mechanical world of ‘algorithmia’ and the vague and unpredictable world of ‘creativity’. They reframed creative disciplines such as architecture as problem-solving activities that could be depicted through a stepwise execution of instructions, and positioned architecture as a field of inquiry that would intertwine aesthetic and scientific concerns.

Algorithmic Exhaustion

The statements made by the CCUM participants, which invoked algorithms along with creative promises, have multiple echoes in contemporary computational design practices—albeit with slightly different emphases. Scholars of digital media have extensively debated how contemporary computational practices and discourses oscillate from a desire to increase anticipation and control, to a fascination with contingency, chance, and unforeseen results—even linking the pragmatic logics of computer code with irrational and ‘magical thinking’. Wendy Chun talks about “software as sourcery”; Ian Bogost about “computational theocracy”; Ed Finn about “code as magic” [34]. Like these authors, who argue that algorithmic procedures and mysticism go hand in hand, contemporary architecture practices also invoke algorithms together with statements about unpredictability, randomness, and uncertainty. Note, for example, the ubiquitous rhetoric of recent parametric design, which promises algorithmic processes that are fully anticipated yet lead to unexpected results. Greg Lynn, for instance, has described how his Embryological House was motivated by the desire to use computer-aided rule-based procedures that would lead to unpredictable formal repertoires. Comparing the computer to a pet that is both “domesticated and wild,” he has argued for the computer as an instrument that could bring together “a degree of discipline and unanticipated behavior to the design process” [35]. The same could be said about the formal-procedural repertoire celebrated by computer-aided architects such as Zaha Hadid Architects or Michael Meredith’s firm MOS—among many others. The presumption here is that algorithmic procedures would lead to unforeseen—yet not arbitrary—meaningful formal discovery.

Yet, for the CCUM participants, creativity had less to do with contingency than with exhaustion. Architecture researchers equated the notion of algorithmic creativity with the capacity to produce an unlimited number of combinatorial arrangements through a limited number of well-defined and simple rules. They promised a type of algorithmic creativity that involved the development of exhaustive sets of possibilities—as opposed to an optimal, efficient result. What mattered was to take into consideration all possible relations, to exhaust the possible even without a defined means or goal. Exhaustion—unlike efficiency or optimization—was time-consuming and costly; yet it was considered worthwhile because it promised variability within a repetitive step-by-step process. In the pursuit of exhaustion, algorithmic drawings complied with the indeterminate by providing multiple solutions to a problem, all the while remaining precise through a repetitive step-by-step process. Seen from that standpoint, their notion of creativity involved a type of infinity that could be predicted with recursion, through a set of step-by-step rules—an algorithm.
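The contrast between exhaustion and optimization can be stated in computational terms. The following sketch is mine, not drawn from the CCUM materials: given a handful of labeled units, an exhaustive procedure enumerates every arrangement by applying the same repetitive rule, rather than converging on a single ‘best’ result—a limited rule set yielding a combinatorially large but complete solution space.

```python
from itertools import permutations

# Hypothetical units, standing in for the 'discrete steps' or building
# parts that a CCUM-style exercise might have enumerated.
units = ["A", "B", "C"]

# Exhaustion: every ordering is produced by the same step-by-step rule;
# no criterion selects one arrangement over another.
arrangements = list(permutations(units))
print(len(arrangements))  # 3! = 6 orderings
```

With n units the procedure yields n! arrangements, which is why exhaustion was “time-consuming and costly” in a way optimization was not: the cost grows factorially while the rule set stays fixed.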

Today, the fascination with exhaustion occurs most perversely in the name of a ‘pseudo-personalization’ embraced by computer-aided architects. The architecture historian Reinhold Martin has placed recent digital architecture practices, such as Greg Lynn’s no-two-are-the-same Embryological Houses of 2000, in comparison with computerized corporate architecture, such as Kevin Roche’s no-two-are-the-same offices for Union Carbide, and described such projects from the point of view of ‘mass customization’. Martin’s association arrived at the formulation that such projects were designed to “make available to the consumer a rainbow of aesthetic and/or technical choices within parametrically variable tolerances… [that] can be adjusted in a digital model to suit ever more personal preferences” [36]. In other words, the goal of the computer was to design almost personalized products for hyper-individuated consumers composed of ever finer data sets. The dream of algorithmic exhaustion originated with a promise to take everything—and everyone—into account. Today, we should acknowledge the histories and genealogies of such practices and interrogate whether this aspiration, which began with the first computer design practices of the 1960s, is designed to enable choice or to constrain it within a surplus of mass-customized forms and lifestyles.

References

  1. Secretary of the Department of Mathematics of the University of Madrid, typed document, 1965. From General Archives, Universidad Complutense de Madrid. .
  2. The contract stipulated that IBM would donate an annual figure of three million pesetas to be destined for research grants, discounts on maintenance operations, and the salary for four IBM technicians. Legal contract, Fernando de Asúa Sejorant, representing International Business Machines, S.A.E. and Enrique Gutierrez Ríos, representing the University of Madrid. January 13th 1966. From General Archives, Universidad Complutense de Madrid. .
  3. Javier Seguí de la Riba, Cuaderno 3, Seminario de Análisis y Generación Automática de Formas Arquitectónicas, Reflexiones en Torno al diseño (Madrid : CCUM, 1972): 40 .
  4. Becas, Cursos, Usuarios (Madrid: CCUM, Nov 1968): 4 .
  5. Becas, Cursos, Usuarios (Madrid: CCUM, No date): 11 .
  6. See for instance Ton Sales, “La informatica commercial espanola en la primera decada (1960-1970): Apuntes para una historia de la Informatica en Espana,” Novatica vol. 34 (1980): 53-59 .
  7. The organization of the CCUM was partially based in other previous Calculation Centers, such as the one in the National University of Buenos Aires, Argentina, where Ernesto Garcia Camarero had worked from 1960 to 1962 teaching programming languages. Some of the interdisciplinary themes that characterized the CCUM—such as the use of computers in linguistics and literature—had its origin in this aforementioned institution. However, the introduction of artists, architects and musicians in the CCUM happened through informal personal proposals. On the Calculation Institute of the National University of Buenos Aires see Ernesto Garcia Camarero, “Algunos Recuerdos sobre los origenes del cálculo automático en Argentina, y sus antecedents en España e Italia” in Revista Brasileira de História da Matemática, Vol. 7 no 13 (April-Sept 2007): 109-130. On the participants’ personal accounts concerning their engagement with the CCUM see J. Bardenes and J. Luis Martinez, ed. El Centro de Cálculo 30 años después (Alicante: Museo de la Universidad de Alicante, 2003) .
  8. Ernesto García Camarero, “L’art cybernétique,” SIGMA 9, Contact II, Art et ordinateur (Bordeaux, 1973).
  9. Ernesto García Camarero, “L’ordinateur et la créativité,” in L’ordinateur et la créativité (Madrid: CCUM, 1970): 5.
  10. Ibid., 5.
  11. Ernesto García Camarero, “Algoritmización de los procesos de diseño,” Boletín del Centro de Cálculo de la Universidad de Madrid, no. 15 (Madrid: CCUM, June 1971): 24.
  12. Ernesto García Camarero, “L’ordinateur et la créativité,” in L’ordinateur et la créativité (Madrid: CCUM, 1970): 5.
  13. Ibid., 6.
  14. Letters to the Editor: Hartmut Huber, “Algorithm and Formula,” and Donald E. Knuth, “Algorithm and Program,” Communications of the ACM, vol. 9, no. 4 (April 1966).
  15. Javier Seguí de la Riva, Cuaderno 3, Seminario de Análisis y Generación Automática de Formas Arquitectónicas, Reflexiones en torno al diseño (Madrid: CCUM, 1972): 99. For the original quote see J. Christopher Jones, “The state-of-the-art in design methods,” in Anthony Ward, ed., Design Methods in Architecture, Architectural Association Paper no. 4 (New York: G. Wittenborn, 1969): 193.
  16. J. Christopher Jones, “The state-of-the-art in design methods,” in Anthony Ward, ed., Design Methods in Architecture, Architectural Association Paper no. 4 (New York: G. Wittenborn, 1969): 193.
  17. Ernesto García Camarero, CCUM conference, June 26, 1969. Published as “Seminario sulla generazione delle forme plastiche,” D’ARS, no. 46-47 (Milan, July-Nov 1969): 40-45.
  18. Javier Seguí de la Riva, Cuaderno 3, Seminario de Análisis y Generación Automática de Formas Arquitectónicas, Reflexiones en torno al diseño (Madrid: CCUM, 1972): 40.
  19. J. Seguí and MVG Guitián, “Investigación en procesos de diseño. Modelo operativo de formalización,” Boletín del Centro de Cálculo de la Universidad de Madrid, no. 24 (Madrid: CCUM, Jan 1974): 6.
  20. Ibid., 7.
  21. Javier Seguí de la Riva, interviewed by Aramis López Juan, in Del cálculo numérico a la creatividad abierta (Madrid: UCM, 2012): 145.
  22. In this article I refer only to Javier Seguí’s research projects and teaching. Seguí graduated from the Madrid Technical School of Architecture in 1964, obtaining his PhD in 1966. He complemented his studies in architecture with studies of psychology and sociology and with IBM courses on Fortran, shaping a particular interest in pseudo-scientific design “methodologies”—from Alexander Klein to Morris Asimow and Christopher Alexander, among others. At the School of Architecture, he taught drawing classes in the Department of “Analysis of Forms,” where he won the professorship in 1974. In addition to a career as a teacher, architecture researcher, and painter of some note, Seguí also participated in a number of building projects related to the Second Development Plan. See Juan Daniel Fullaondo, Javier Seguí de la Riva (1965-1983): arquitecto (Madrid: Graficinco, 1999).
  23. J. Seguí and MVG Guitián, Experiencias en diseño. Ensayo de modelo procesativo (Madrid: ETSAM, Talleres de Arquitectura, 1972): 104.
  24. J. Seguí and MVG Guitián, “Investigación en procesos de diseño. Modelo operativo de formalización,” Boletín del Centro de Cálculo de la Universidad de Madrid, no. 24 (Madrid: CCUM, Jan 1974): 4.
  25. Ibid., 11.
  26. Reinhold Martin, “Pattern-Seeing,” in The Organizational Complex: Architecture, Media, and Corporate Space (Cambridge, Mass.: MIT Press, 2003): 42-80.
  27. Ibid., 122.
  28. Orit Halpern, Beautiful Data: A History of Vision and Reason since 1945 (Durham: Duke University Press, 2014): 124.
  29. Marshall McLuhan, The Gutenberg Galaxy: The Making of Typographic Man (Toronto: University of Toronto Press, 1962).
  30. J. Seguí and MVG Guitián, “Investigación en procesos de diseño. Modelo operativo de formalización,” Boletín del Centro de Cálculo de la Universidad de Madrid, no. 24 (Madrid: CCUM, Jan 1974): 19.
  31. See Reinhold Martin, The Organizational Complex: Architecture, Media, and Corporate Space (Cambridge, Mass.: MIT Press, 2005), and John Harwood, The Interface: IBM and the Transformation of Corporate Design, 1945-1976 (Minneapolis: University of Minnesota Press, 2011).
  32. See Miguel Fisac, ed., Arquitectura Viva SL, no. 101 (2003): 84.
  33. In 1953, IBM made a decisive shift to computing as its primary field of activity, and it was at this moment that Eliot Noyes arrived on the scene at IBM as a major player in the redesign of the showroom at IBM Corporate Headquarters in New York, to showcase the IBM 702. In 1956, Noyes was appointed consultant director of design and began the task of redesigning the corporation, which encompassed not only the redesign of the machines themselves but also graphics, interiors, exhibits, and buildings. See Gordon Bruce, “International Business Machines,” in Eliot Noyes: A Pioneer of Design and Architecture in the Age of American Modernism (London; New York: Phaidon, 2006).
  34. See Wendy Hui Kyong Chun, “On Sourcery, or Code as Fetish,” Configurations, vol. 16, no. 3 (Fall 2008): 299-324; Ian Bogost, “The Cathedral of Computation,” The Atlantic (January 2015); Ed Finn, “Introduction,” in What Algorithms Want: Imagination in the Age of Computing (Cambridge, Mass.: The MIT Press, 2017).
  35. Greg Lynn, Animate Form (New York: Princeton Architectural Press, 1999): 19-20.
  36. Reinhold Martin, Utopia's Ghost: Architecture and Postmodernism, Again (Minneapolis: University of Minnesota Press, 2010): 127.
