2022
Bubble2Floor: A Pedagogical Experience With Deep Learning for Floor Plan Generation
Pedro Veloso, Jinmo Rhee, Ardavan Bidgoli, Manuel Ladron de Guevara.
The 27th International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA) 2022
2020
Towards a Distributed, Robotically Assisted Construction Framework
Zhihao Fang, Yuning Wu, Ammar Hassonjee, Ardavan Bidgoli, Daniel Cardoso-Llach.
The 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA).
Artistic Style in Robotic Painting; a Machine Learning Approach to Learning Brushstroke from Human Artists
Bidgoli, Ardavan, Manuel Ladron De Guevara, Cinnie Hsiung, Jean Oh, Eunsu Kang.
The 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
V-Dream: Immersive Exploration of Generative Design Solution Space.
Keshavarzi, Mohammad, Ardavan Bidgoli, and Hans Kellner.
The 22nd International Conference on Human-Computer Interaction.
2019
Machinic Surrogates: Human-Machine Relationships in Computational Creativity
Bidgoli, Ardavan, Eunsu Kang, Daniel Cardoso Llach.
International Symposium on Electronic Arts (ISEA 2019), Gwangju, South Korea.
2018
DeepCloud: the application of a data-driven generative model in design
Bidgoli, Ardavan, and Pedro Veloso.
Recalibration: On Imprecision and Infidelity, Proceedings of the 2018 Association of Computer Aided Design in Architecture (ACADIA) Conference, pp. 176–85.
Image Classification for Robotic Plastering with Convolutional Neural Network
Bard, Joshua, et al.
Robotic Fabrication in Architecture, Art and Design 2018, Springer, Cham, pp. 3–15. https://link.springer.com/chapter/10.1007/978-3-319-92294-2_1
2017
Assisted Automation: Three Learning Experiences in Architectural Robotics
Cardoso Llach, Daniel, Ardavan Bidgoli, and Shokofeh Darbari.
International Journal of Architectural Computing, vol. 15, no. 1, 2017, pp. 87–102.
2015
Towards an Integrated Design-Making Approach in Architectural Robotics
Bidgoli, Ardavan.
Pennsylvania State University, 2015.
Towards a Motion Grammar for Robotic Stereotomy
Bidgoli, Ardavan, and Daniel Cardoso Llach.
Emerging Experience in Past, Present, and Future of Digital Architecture, Proceedings of the 20th International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), pp. 723–32.
Bubble2Floor: A Pedagogical Experience With Deep Learning for Floor Plan Generation
This paper reports on a pedagogical experience that incorporates deep learning into design, in the context of a recently created course at the Carnegie Mellon University School of Architecture. It analyses an exercise called Bubble2Floor (B2F), in which students design floor plans for a multi-story row-house complex.
The pipeline for B2F includes a parametric workflow to synthesise an image dataset with pairs of apartment floor plans and corresponding bubble diagrams, a modified Pix2Pix model that maps bubble diagrams to floor plan diagrams, and a computer vision workflow to translate images to the geometric model.
In this pedagogical research, we provide a series of observations on the challenges faced by students and on how they customised different elements of B2F to address their personal preferences, the problem constraints of the housing complex, and the obstacles posed by the computational workflow. Based on these observations, we conclude by emphasising the importance of training architects to be active agents in the creation of deep learning workflows, and of making these workflows accessible for socially relevant and constrained design problems, such as housing.
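The modified Pix2Pix model itself is described in the paper; purely as a minimal sketch of the kind of image-to-image generator such a pipeline builds on, a tiny U-Net in PyTorch might look like the following, where the channel counts, network depth, and 256-pixel resolution are illustrative assumptions rather than the authors' configuration:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style generator, as used in Pix2Pix-like
    image-to-image translation (bubble diagram -> floor plan)."""

    def __init__(self, in_ch=3, out_ch=3, base=64):
        super().__init__()
        # Encoder: downsample the bubble-diagram image.
        self.down1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.down2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                   nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
        # Decoder: upsample back to a floor-plan image.
        self.up1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1),
                                 nn.BatchNorm2d(base), nn.ReLU())
        # The skip connection doubles the channel count at the last stage.
        self.up2 = nn.Sequential(nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        d1 = self.down1(x)
        d2 = self.down2(d1)
        u1 = self.up1(d2)
        return self.up2(torch.cat([u1, d1], dim=1))

# Illustrative usage: a batch of 256x256 bubble-diagram images.
bubbles = torch.randn(4, 3, 256, 256)
plans = TinyUNet()(bubbles)  # -> (4, 3, 256, 256) floor-plan images
```

The skip connection is the characteristic Pix2Pix choice: it lets low-level spatial structure from the bubble diagram flow directly to the output plan instead of being squeezed through the bottleneck.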
--------
Veloso, Pedro, Jinmo Rhee, Ardavan Bidgoli, and Manuel Ladron De Guevara. 2022. “Bubble2Floor: A Pedagogical Experience With Deep Learning for Floor Plan Generation.” In Proceedings of the 27th International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Sydney.
Towards a Distributed, Robotically Assisted Construction Framework
In this paper, we document progress towards an architectural framework for adaptive and distributed robotically assisted construction. Drawing from state-of-the-art reinforcement learning techniques, our framework allows a variable number of robots to adaptively execute simple construction tasks.
The paper describes the framework, demonstrates its potential through simulations of pick-and-place and spray-coating construction tasks conducted by a fleet of drones, and outlines a proof-of-concept experiment. With these elements, the paper contributes to current research in architectural and construction robotics, particularly to efforts towards more adaptive and hybrid human-machine construction ecosystems. The code is available at https://github.com/c0deLab/RAiC.
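The full implementation lives in the repository linked above; purely as a toy illustration of reinforcement learning applied to a construction-like task (not the paper's method, and single-agent rather than multi-robot), a tabular Q-learning sketch for pick-and-place on a one-dimensional strip might look like this, with the strip length, rewards, and exploration rate all invented for the sketch:

```python
import random

# Toy pick-and-place on a strip of cells: the agent must reach the
# pick cell, then the place cell. State = (position, carrying?).
N, PICK, PLACE = 8, 1, 6
ACTIONS = (-1, +1)                        # move left / move right
Q = {}                                    # Q[(state, action)] -> value

def q(s, a):
    return Q.get((s, a), 0.0)

def step(state, action):
    pos, carrying = state
    pos = max(0, min(N - 1, pos + action))
    if not carrying and pos == PICK:
        return (pos, True), 1.0, False    # picked up the block
    if carrying and pos == PLACE:
        return (pos, carrying), 10.0, True  # placed it: episode ends
    return (pos, carrying), -0.1, False   # small cost per step

for _ in range(2000):                     # training episodes
    state = (0, False)
    for _ in range(200):                  # cap episode length
        if random.random() < 0.1:         # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q(state, a))
        nxt, reward, done = step(state, action)
        target = reward + 0.9 * max(q(nxt, a) for a in ACTIONS)
        Q[(state, action)] = q(state, action) + 0.5 * (target - q(state, action))
        state = nxt
        if done:
            break
```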
--------
Fang, Zhihao, Yuning Wu, Ammar Hassonjee, Ardavan Bidgoli, and Daniel Cardoso-Llach. 2020. “Towards a Distributed, Robotically Assisted Construction Framework.” In Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA).
Artistic Style in Robotic Painting; a Machine Learning Approach to Learning Brushstroke from Human Artists
Robotic painting has been a subject of interest among both artists and roboticists since the 1970s. Researchers and interdisciplinary artists have employed various painting techniques and human-robot collaboration models to create visual mediums on canvas. One of the challenges of robotic painting is applying a desired artistic style to the painting. Style transfer techniques with machine learning models have helped address this challenge for the visual style of a specific painting. However, other manual elements of style, i.e., the painting techniques and brushstrokes of an artist, have not been fully addressed.
We propose a method to integrate an artistic style into the brushstrokes and the painting process through collaboration with a human artist. In this paper, we describe our approach to (1) collect brushstrokes and hand-brush motion samples from an artist, (2) train a generative model to generate brushstrokes that pertain to the artist’s style, and (3) integrate the learned model on a robot arm to paint on a canvas. In a preliminary study, 71% of human evaluators found our robot’s paintings consistent with the characteristics of the artist’s style.
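The paper's generative model is not reproduced here; as one plausible sketch of a brushstroke generator, a small variational autoencoder over fixed-length (x, y, pressure) stroke trajectories might look like this, where the trajectory length, latent size, and random stand-in data are assumptions for illustration:

```python
import torch
import torch.nn as nn

T, D, Z = 32, 3, 8  # 32 points per stroke, (x, y, pressure), latent dim

class StrokeVAE(nn.Module):
    """Toy VAE over flattened brushstroke trajectories."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(T * D, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, Z), nn.Linear(128, Z)
        self.dec = nn.Sequential(nn.Linear(Z, 128), nn.ReLU(), nn.Linear(128, T * D))

    def forward(self, x):
        h = self.enc(x.flatten(1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z).view(-1, T, D), mu, logvar

model = StrokeVAE()
strokes = torch.randn(16, T, D)             # stand-in for recorded strokes
recon, mu, logvar = model(strokes)
rec = ((recon - strokes) ** 2).mean()       # reconstruction term
kld = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
loss = rec + 0.001 * kld                    # train with an optimizer
# Sampling new strokes in the learned style:
new_strokes = model.dec(torch.randn(5, Z)).view(-1, T, D)
```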
--------
Bidgoli, Ardavan, Manuel Ladron De Guevara, Cinnie Hsiung, Jean Oh, and Eunsu Kang. 2020. “Artistic Style in Robotic Painting; a Machine Learning Approach to Learning Brushstroke from Human Artists.” In Proceedings of the 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples.
V-Dream: Immersive Exploration of Generative Design Solution Space
Generative design workflows have introduced alternative paradigms in the domain of computational design, allowing designers to generate large pools of valid solutions by defining a set of goals and constraints. However, analyzing and narrowing down the generated solution space, which usually consists of various high-dimensional properties, has been a major challenge in current generative workflows. By taking advantage of interactive, unbounded spatial exploration and the visual immersion offered by virtual reality platforms, we propose V-Dream, a virtual reality generative analysis framework for exploring large-scale solution spaces. V-Dream proposes a hybrid search workflow in which a spatial stochastic search approach is combined with a recommender system, allowing users to pick desired candidates and eliminate undesired ones iteratively. In each cycle, V-Dream reorganizes the remaining options into clusters based on the defined features. Moreover, our framework allows users to inspect design solutions and evaluate their performance metrics at various hierarchical levels, assisting them in narrowing down the solution space through iterative search/select/re-cluster cycles in an immersive fashion. Finally, we present a prototype of our proposed framework, illustrating how users can navigate and narrow down desired solutions from a pool of over 16,000 monitor stands generated by Autodesk’s Dreamcatcher software.
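Outside VR, the search/select/re-cluster cycle reduces to a simple loop; a non-immersive sketch using k-means from scikit-learn over invented design feature vectors might look like this (the pool size, feature count, and the user's picks are all stand-ins):

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for a generative solution pool: 16,000 designs x 12 features.
rng = np.random.default_rng(0)
pool = rng.normal(size=(16000, 12))

def recluster(designs, k=8):
    """Group the remaining candidates so a user can inspect one
    representative per cluster, as in V-Dream's iterative cycle."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(designs)

# One search/select/re-cluster iteration: the user keeps the clusters
# whose representatives they like (chosen arbitrarily here).
labels = recluster(pool)
kept = np.isin(labels, [0, 3, 5])   # stand-in for the user's picks
pool = pool[kept]                    # narrowed solution space
labels = recluster(pool)             # re-cluster for the next cycle
```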
--------
Keshavarzi, Mohammad, Ardavan Bidgoli, and Hans Kellner. 2020. “V-Dream: Immersive Exploration of Generative Design Solution Space.” In Proceedings of the 22nd International Conference on Human-Computer Interaction.
Machinic Surrogates: Human-Machine Relationships in Computational Creativity
Recent advancements in artificial intelligence (AI) and its sub-branch machine learning (ML) promise machines that go beyond the boundaries of automation and behave autonomously. Applications of these machines in creative practices such as art and design entail relationships between users and machines that have been described as a form of "collaboration" or "co-creation" between computational and human agents [1, 2]. This paper uses examples from art and design to argue that this frame is incomplete, as it fails to acknowledge the socio-technical nature of AI systems and the different human agencies involved in their design, implementation, and operation. Situating applications of AI-enabled tools in creative practices on a spectrum between automation and autonomy, this paper distinguishes different kinds of human engagement elicited by systems deemed "automated" or "autonomous." Reviewing models of artistic collaboration during the late 20th century, it suggests that collaboration is at the core of these artistic practices. We build upon the growing literature of machine learning and art to look for the human agencies inscribed in works of "computational creativity," and expand the "co-creation" frame to incorporate emerging forms of human-human collaboration mediated through technical artifacts such as algorithms and data.
--------
Bidgoli, Ardavan, Eunsu Kang, and Daniel Cardoso Llach. 2019. “Machinic Surrogates: Human-Machine Relationships in Computational Creativity.” In Proceedings of the 25th International Symposium on Electronic Art (ISEA 2019). Gwangju, South Korea.
DeepCloud: the application of a data-driven generative model in design
Generative systems have significant potential to synthesize innovative design alternatives. Still, most of the common systems adopted in design require the designer to explicitly define the specifications of the procedures and, in some cases, the design space. In contrast, a generative system could potentially learn both aspects by processing a database of existing solutions without the designer's supervision. To explore this possibility, we review recent advancements in generative models in machine learning and current applications of learning techniques in design. Then, we describe the development of a data-driven generative system titled DeepCloud. It combines an autoencoder architecture for point clouds with a web-based interface and analog input devices to provide an intuitive experience for the data-driven generation of design alternatives. We delineate the implementation of two prototypes of DeepCloud, their contributions, and their potential for generative design.
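DeepCloud's exact architecture is given in the paper; a bare-bones point-cloud autoencoder in the same spirit, with a PointNet-style max-pooled encoder, an MLP decoder, and a Chamfer-distance loss, might be sketched as follows (all sizes and the random stand-in data are assumptions):

```python
import torch
import torch.nn as nn

N, Z = 1024, 128  # points per cloud, latent size (illustrative)

class PointAE(nn.Module):
    """Toy point-cloud autoencoder: per-point MLP + max-pool encoder,
    MLP decoder emitting N xyz points."""
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, Z))
        self.dec = nn.Sequential(nn.Linear(Z, 256), nn.ReLU(),
                                 nn.Linear(256, N * 3))

    def forward(self, pts):                        # pts: (B, N, 3)
        feat = self.point_mlp(pts).max(dim=1).values  # order-invariant code
        return self.dec(feat).view(-1, N, 3)

def chamfer(a, b):
    """Symmetric Chamfer distance between point clouds (B, N, 3)."""
    d = torch.cdist(a, b)                          # pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

model = PointAE()
cloud = torch.randn(8, N, 3)                       # stand-in for a dataset
loss = chamfer(model(cloud), cloud)                # train with an optimizer
```

Max-pooling over the per-point features makes the latent code invariant to point ordering, which is why this family of encoders suits unordered point clouds.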
--------
Bidgoli, Ardavan, and Pedro Veloso. “DeepCloud: The Application of a Data-Driven Generative Model in Design.” Recalibration: On Imprecision and Infidelity Paper Proceedings Book for the 2018 Association of Computer Aided Design in Architecture Conference, IngramSpark, 2018, pp. 176–85.
Image Classification for Robotic Plastering with Convolutional Neural Network
Inspecting robotically fabricated objects to detect and classify discrepancies between virtual target models and as-built realities is one of the challenges facing robotic fabrication. Industrial-grade computer vision methods have been widely used to detect manufacturing flaws in mass production lines. However, in mass customization, a versatile and robust method should be flexible enough to ignore construction tolerances while detecting specified flaws in varied parts. This study aims to leverage recent developments in machine learning and convolutional neural networks to improve the resiliency and accuracy of surface inspections in architectural robotics. Under a supervised learning scenario, the authors compared two approaches: (1) transfer learning on a general-purpose Convolutional Neural Network (CNN) image classifier, and (2) designing and training a CNN from scratch to detect and categorize flaws in a robotic plastering workflow. Both CNNs were combined with conventional search methods to improve the accuracy and efficiency of the system. A web-based graphical user interface and a real-time video projection method were also developed to facilitate user interaction and control over the workflow.
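As a sketch of the first approach only (transfer learning on a general-purpose classifier), fine-tuning a torchvision ResNet head for a few flaw categories could look like this; the class labels, dataset, and hyperparameters are placeholders, not the study's setup:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g. ok / crack / bump / void -- placeholder labels

# Start from ImageNet weights, freeze the convolutional backbone,
# and retrain only a new classification head on surface images.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```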
--------
Bard, Joshua, et al. “Image Classification for Robotic Plastering with Convolutional Neural Network.” Robotic Fabrication in Architecture, Art and Design 2018, Springer, Cham, 2018, pp. 3–15, https://link.springer.com/chapter/10.1007/978-3-319-92294-2_1.
Assisted Automation: Three Learning Experiences in Architectural Robotics
Fueled by long-standing dreams of both material efficiency and aesthetic liberation, robots have become part of mainstream architectural discourses, raising the question: How may we nurture an ethos of visual, tactile, and spatial exploration in technologies that epitomize the legacies of industrial automation—for example, the pursuit of managerial efficiency, control, and an ever-finer subdivision of labor? Reviewing and extending a growing body of research on architectural robotics pedagogy, and bridging a constructionist tradition of design education with recent studies of science and technology, this article offers both a conceptual framework and concrete strategies to incorporate robots into architectural design education in ways that foster a spirit of exploration and discovery, which is key to learning creative design. Through reflective accounts of three learning experiences, we introduce the notions “assisted automation” and “robotic embodiment” as devices to enrich current approaches to robot–human design, highlighting situated and embodied aspects of designing with robotic machines.
--------
Cardoso Llach, Daniel, et al. “Assisted Automation: Three Learning Experiences in Architectural Robotics.” International Journal of Architectural Computing, vol. 15, no. 1, 2017, pp. 87–102.
Towards an Integrated Design-Making Approach in Architectural Robotics
Using industrial robots in creative design has generated wide interest among designers, artists, and architects. Though generic, these machines, combined with custom or task-specific mounted tools and digital descriptions, have recently become vehicles of creative exploration in design, architecture, and the arts. Even though numerous researchers and practitioners have proposed applications of robotics in architectural practice, the field is still in its infancy and thus needs more exploration by design and architectural researchers. In this thesis, I investigate opportunities in architectural robotics by reviewing its design space and characteristics in academia and practice. This review led to the hypothesis that currently available software toolboxes are not sufficient mediums between architects and robots. Accordingly, we need a medium that embeds all the constraints affecting a specific robotic system, its mounted tool, and the related material system, from the early stages of design to materialization. To test this hypothesis, I proposed an analytical grammar to codify spatial design, the form-finding process, and robotic fabrication behavior through visual computation and algorithmic approaches. The system's affordances were later studied through physical prototypes.
--------
Bidgoli, Ardavan. Towards an Integrated Design-Making Approach in Architectural Robotics. Pennsylvania State University, 2015.
Towards a Motion Grammar for Robotic Stereotomy
This paper presents progress towards the definition of a motion grammar for robotic stereotomy. It describes a vocabulary of motions able to generate complex forms by cutting, slicing, and/or carving 3-D blocks of material using a robotic arm and a custom-made cutting tool. While shape grammars usually deal with graphical descriptions of designs, a motion grammar seeks to address the 3-D harmonic movements of machine, tool, and material substrate choreographically, suggesting motion as a generative vehicle of exploration in both designing and making. Several models and prototypes are presented and discussed.
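The grammar itself is developed in the paper; purely as a toy rendering of the idea, a symbolic motion vocabulary with rewrite rules that expand into a primitive cutting sequence might look like this in Python (the vocabulary and rules are invented for illustration):

```python
# Toy motion grammar: nonterminals expand into sequences of primitive
# tool motions. Vocabulary and rules are invented for illustration.
RULES = {
    "CUT_BLOCK":  ["APPROACH", "SLICE_PASS", "SLICE_PASS", "RETRACT"],
    "SLICE_PASS": ["plunge", "sweep", "lift"],
    "APPROACH":   ["move_to_block", "orient_tool"],
    "RETRACT":    ["lift", "move_home"],
}

def derive(symbol):
    """Recursively rewrite a symbol until only primitive motions remain."""
    if symbol not in RULES:          # lowercase = primitive machine motion
        return [symbol]
    return [m for part in RULES[symbol] for m in derive(part)]

print(derive("CUT_BLOCK"))
# ['move_to_block', 'orient_tool', 'plunge', 'sweep', 'lift',
#  'plunge', 'sweep', 'lift', 'lift', 'move_home']
```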
--------
Bidgoli, Ardavan, and Daniel Cardoso Llach. “Towards A Motion Grammar for Robotic Stereotomy.” Emerging Experience in Past, Present, and Future of Digital Architecture, Proceedings of the 20th International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), 2015, pp. 723–32.