There is an anxiety often expressed around automation: that computers will replace humans in daily life. Robotic production in factories, for example, transformed the American industrial labor economy in the last decades of the 20th century. Figures from the architecture and engineering worlds have at times flirted with design automation, a narrative traced in Daniel Cardoso Llach’s Builders of the Vision.1 He presents computer engineer Douglas Ross and mathematician/designer Steven Coons as embodying opposing views of automation and augmentation. Ross, along with his research group at MIT’s Electronic Systems Laboratory, understood design “as a noun: a geometric specification that could be calculated… if the design problem was adequately represented in a formal — as opposed to natural — language.” Coons, meanwhile, recognized design as “a verb: an open-ended and essentially human activity,” wherein the computer of the 1960s could play an increasingly supportive role but never fully replace the human. Half a century later, this debate remains unresolved — but I argue that only one of these positions suggests a multiplicity of futures and a meaningful role for the human, and that it is the latter path we should follow.
This section began by defining design as a fundamentally creative act, with the hypothesis that computation can enable designers to better approach complexity in their work. But what does creativity mean when a computer can exhibit it? For a designer, automating away every step of their work would be a nightmare — a critique Natalie makes in her manually ‘generated’ artwork. Even in the era of Ross and Coons, computers could generate multiple forms from a set of encoded rules. Today, it seems that machine learning, in taking on the previously exclusively human role of analyzing data and synthesizing rules, removes the designer from that side of the equation. At the opposite end, with automated algorithms for searching and selecting from a generated possibility space, it might seem that there is nothing left for the designer to do whatsoever. But interrogate this narrative further, and cracks appear in the façade — new entry points and levels at which the designer can work.
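Rule-based generation of the kind available even in the Ross–Coons era can be sketched in a few lines. The following is an illustrative example of my own (not from the text): a Lindenmayer-system-style rewriting grammar, where two encoded rules expand a seed into an open-ended family of distinct “forms.”

```python
def generate(seed, rules, steps):
    """Repeatedly apply symbol-rewriting rules to the seed,
    collecting the form produced at each step."""
    form = seed
    history = [form]
    for _ in range(steps):
        # Rewrite every symbol in parallel; symbols without a rule are kept.
        form = "".join(rules.get(symbol, symbol) for symbol in form)
        history.append(form)
    return history

# Two rules suffice to produce an unbounded sequence of distinct forms.
rules = {"A": "AB", "B": "A"}
forms = generate("A", rules, 5)
# forms → ['A', 'AB', 'ABA', 'ABAAB', 'ABAABABA', 'ABAABABAABAAB']
```

The rules do the generating; the human’s role is choosing the rules — the division of labor the paragraph above describes.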
A recent technological development will serve as an example to explore the changing role of the designer. In a 2014 paper, Ian Goodfellow and researchers at Université de Montréal introduced generative adversarial nets (GANs),2 a novel framework for neural network-based machine learning. As the name implies, one major component is a generative model: a technical structure that, after processing and analyzing a large collection of data (a process called ‘training’), can subsequently generate new objects that cohere stylistically with the given data. Much recent work has centered on image generation — for example, training a generative model on millions of Google Street View photos in order to create new, plausible, but completely imaginary Street View images. The other key component is a discriminative model, which appraises the output of the generative model and classifies the objects as ‘real’ or ‘generated.’ Both models are incentivized to optimize their performance: the generative model to generate objects that are classified as ‘real,’ and the discriminative model to maintain accuracy in its classifications. It is a conceptually sound framework, and it would appear to automate a design-adjacent process of rulemaking, generating, exploring, and curating. Where does the human fit in? The example of Max’s music generation software is instructive: the designer no longer works directly with objects (images, audio clips), but instead acts as a coordinator of the computational technologies. To draw another musical parallel, the human is less a virtuosic player of an instrument than a symphony conductor, signaling and directing flows. By specifying and providing the data that the generative model is trained on, and tweaking the nuances of the discriminative model, the designer becomes almost a facilitator of a conversation between the technological agencies, a role rich with possibilities for working within and toward generative systems.
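The adversarial dynamic described above can be made concrete with a deliberately minimal sketch — my own toy illustration, not the architecture of the Goodfellow et al. paper, which uses deep networks. Here the “data” is one-dimensional (numbers drawn from a normal distribution centered at 4), the generator is a single linear function, and the discriminator a single logistic unit; each is nudged by hand-derived gradients toward its competing objective.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator G(z) = a*z + b, starting far from the data
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr = 0.03

for step in range(5000):
    x = 4.0 + 0.5 * rng.standard_normal()   # one real sample from N(4, 0.5)
    z = rng.standard_normal()               # noise input to the generator
    g = a * z + b                           # one generated sample

    # Discriminator step: ascend log D(x) + log(1 - D(g)),
    # i.e., learn to rate real samples high and generated ones low.
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * ((1.0 - d_real) * x - d_fake * g)
    c += lr * ((1.0 - d_real) - d_fake)

    # Generator step: ascend log D(g) (the "non-saturating" variant),
    # i.e., learn to produce samples the discriminator rates as real.
    d_fake = sigmoid(w * g + c)
    grad_g = (1.0 - d_fake) * w             # d/dg of log D(g)
    a += lr * grad_g * z
    b += lr * grad_g

# After training, generated samples should cluster near the real mean (~4),
# even though the generator never sees the data directly — only the
# discriminator's judgments.
generated_mean = float(np.mean(a * rng.standard_normal(10_000) + b))
```

Note what the designer touches in even this toy version: the distribution the real data is drawn from, the capacity of each model, and the learning rates that pace the conversation between them — precisely the coordinating role the paragraph above describes.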
1. Cardoso Llach, Daniel. Builders of the Vision. Routledge, 2015.
2. Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems, pp. 2672–2680. 2014.