{"id":8925,"date":"2021-11-18T20:23:40","date_gmt":"2021-11-18T20:23:40","guid":{"rendered":"http:\/\/peerproduction.net\/editsuite\/?page_id=8925"},"modified":"2022-02-26T21:32:36","modified_gmt":"2022-02-26T21:32:36","slug":"personal-cobot","status":"publish","type":"page","link":"http:\/\/peerproduction.net\/editsuite\/issues\/issue-15-transition\/peer-reviewed-papers\/personal-cobot\/","title":{"rendered":"‘Meet Your Personal Cobot:’ Framing Participatory Research in Makerspaces as a Trading Zone"},"content":{"rendered":"

Tudor B. Ionescu and Jesse de Pagter

Complement: EU automation policy: Towards ethical, human-centered, trustworthy robots? [pdf]

1. Introduction

Recent debates in peer production research (Smith et al., 2020) point to the resurgence of an automation discourse that calls on information and computer technologies to boost the productivity and competitiveness of European production facilities. The latest version of this discourse, known as “Industry 4.0” (Kagermann et al., 2013), envisions a distributed, interconnected, highly automated factory. Although the notion of a Fourth Industrial Revolution did not trigger significant changes in the traditionally conservative European manufacturing domain, the vision of a distributed, interconnected factory seems to have imperceptibly conquered other aspects of economic and social life, notably through the so-called “gig economy” (Woodcock & Graham, 2019) and a new wave of rationalization of the public and private sectors through digitization. In this context, peer production researchers have asked whether “a socially-constructive approach to technology” can provide a viable alternative to the “depopulated vision of Industry 4.0” (Smith et al., 2020, p. 9), while noting that what Smith et al. (2020) termed “post-automation” is already practiced in hackerspaces, makerspaces, fablabs and other public contexts. Among other things, the vision of post-automation emphasizes “democratic deliberation over the technology itself” and “[s]eeing technology as productive commons” (p. 9).

Together with the policy piece complementary to this paper (De Pagter, 2022), we aim to further explore how the ideas behind such democratic deliberation over technology can be established within current discussions and developments in robotics. We zoom in on a specific case study which arguably presents interesting and relevant material about participatory robotics research conducted in a makerspace. To be precise, the study focuses on a publicly funded research project organized around the idea of “democratizing” collaborative robots (or cobots). Cobots are a specific type of robot that allows for much closer human-robot collaboration than before. As part of the studied project, such collaborative industrial robots are being introduced to an Austrian high-tech makerspace. Thanks to its unprecedented safety features, cobot technology has the potential to provide an alternative to the “depopulated vision of Industry 4.0” by promising humans permanent roles in factories as part of “human-robot teams”. Furthermore, cobots reify the compelling vision of a “human-robot symbiosis” (Wilkes et al., 1999), designed to fulfill the seemingly contradictory goals of increasing productivity and flexibility while keeping up worker and customer satisfaction. In this context, the policy piece provides a short analysis of the European Union’s policy-making efforts concerning the future of robotic automation in view of democratizing robotic technology. As the piece states, those policy plans currently emphasize the notion of ethical, human-centred, and trustworthy robots. While critically engaging with such high-level principles, the policy piece argues for an approach that engages with democratic futures of robotics through notions of pluralist technological cultures in which speculative attempts towards democratic technologies and practices can be grounded.

Apart from analyzing cobots as the technological artifacts that are central to the studied project, this paper regards the makerspace as being instrumental in the way the concept of democratization unfolds in the case study. We understand makerspaces as shared machine shops in which different digital, manufacturing, biological and other technologies can be used on site by members of the public (Seravalli, 2012). While makerspaces typically provide their members with access to additive and subtractive manufacturing technologies (e.g., 3D printers, laser cutters, etc.), conventional and collaborative industrial robotic arms have only recently started to figure in such locales, with safety concerns and a lack of expertise in robotics being important barriers to their appropriation by members of makerspace communities. While not a new kind of institution (Smith, 2014), makerspaces have gained popularity in the past 15 years by providing infrastructure and support to their members (Dickel et al., 2019) and other interested individuals and institutions to implement their ideas with the help of the latest manufacturing technologies. Studying the introduction of cobots in a makerspace arguably provides an opportunity to analyze post-automation as a democratic alternative to Industry 4.0 from a micro-perspective. The very location of the makerspace enables public access to and creative usages of the cobot. Our analysis is thereby focused on the production of knowledge and the problems entailed by the implicit notions of human-robot collaboration. Previous research emphasized the tensions and contradictions between the ideal of democratizing technology and “neoliberal business-as-usual” (Braybrooke & Smith, 2018). The aim of this paper is to go beyond the realization that democratic institutional models may be misappropriated by business interests. Instead, we are interested in whether and how different exchanges centered on technologies “in the making” are possible between individual and institutional actors even in “makerspaces defined by institutional encounters” (Braybrooke & Smith, 2018). In this regard, drawing on theory from the fields of science and technology studies (STS) and organizational studies, we conceive of the sociotechnical configuration of the studied project as a trading zone (Galison, 1997; Collins & Evans, 2002).

In the sections that follow, we first provide a more detailed overview of our case study and methodology. Then, we explore the notion of trading zones in the context of technology democratization. The analysis draws on this nuanced trading zone concept in order to examine how the concept of democratization of technology facilitated the project as well as the subsequent encounters between different kinds of institutions that resulted from it. Based on the analysis of those encounters, we trace two transitions: the first concerns the transformation of entrenched knowledge and practices pertaining to cobot safety as the technology is appropriated by members of the makerspace in question. The second pertains to the reconfiguration of the makerspace determined by result-oriented collaborations with research institutes and companies, which entailed the projectification and professionalization of its activities. The conclusion discusses the role of the makerspace in the democratization of cobot technology.

2. Case Study and Methodology

Collaborative industrial robots were initially introduced as devices capable of manipulating heavy work pieces in collaboration with human workers (Colgate et al., 1996). To ensure safety, these devices embodied the principle of passive mechanical support, which—combined with traditional robotic technology—brought about the now stabilized image of an industrial cobot as an anthropomorphic robotic arm endowed with strength and sensitivity. Much like personal computers, today, (personal) cobots can be ordered, installed, and operated by anyone willing to invest in a handy universal helper. They come with downloadable apps and intuitive “zero-programming” user interfaces. Recently, they have also started to figure in makerspaces for various purposes. This prompts questions regarding the safety norms that apply in makerspaces, considering that all possible maker applications—from idea to workspace layout and source code—cannot be known in advance; and, more generally, it brings up questions regarding the required safety measures when an industrial cobot is operated outside of a factory or a research lab. The certification of cobot applications for industrial use requires manufacturing companies to present a certification consultant with all the details of an application, including the precise layout of the operational environment, the intended human-robot collaborative operation modes (ISO, 2016; Rosenstrauch & Krüger, 2017), the specifications of the end effector and the objects being manipulated, and the application’s source code (Michalos et al., 2015). Given these specifications, the consultant carries out physical measurements (forces, moments, angles of attack, etc.) based on a series of predefined hazard scenarios considered relevant for the respective application. If the residual risks (i.e., the risks that cannot be eliminated using technical means) of the application are acceptable with respect to the applicable norms, the consultant grants a safety certificate.
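To make the logic of such a scenario-based certification concrete, the sketch below compares measured contact forces from predefined hazard scenarios against per-scenario limits and decides whether the residual risks would be acceptable. It is a minimal illustration only: the scenario names, threshold values, and the pass/fail rule are invented for this example and do not reproduce any actual safety standard or the consultants' procedure.

```python
# Minimal sketch of a scenario-based residual-risk check, loosely inspired by the
# certification process described above. All scenario names and force limits are
# illustrative placeholders, not values from any actual safety standard.

# Hypothetical quasi-static force limits (in newtons) per hazard scenario.
FORCE_LIMITS_N = {
    "hand_clamped_against_table": 140.0,
    "forearm_struck_in_free_space": 160.0,
    "shoulder_contact_during_handover": 150.0,
}

# Hypothetical measurements taken by a consultant with a force gauge.
measured_forces_n = {
    "hand_clamped_against_table": 118.0,
    "forearm_struck_in_free_space": 171.0,
    "shoulder_contact_during_handover": 96.0,
}

def residual_risk_acceptable(measurements: dict, limits: dict) -> bool:
    """Return True only if every measured force stays below its scenario limit."""
    acceptable = True
    for scenario, limit in limits.items():
        force = measurements.get(scenario)
        if force is None:
            print(f"{scenario}: no measurement available, cannot certify")
            acceptable = False
        elif force > limit:
            print(f"{scenario}: {force:.0f} N exceeds limit of {limit:.0f} N")
            acceptable = False
        else:
            print(f"{scenario}: {force:.0f} N within limit of {limit:.0f} N")
    return acceptable

if residual_risk_acceptable(measured_forces_n, FORCE_LIMITS_N):
    print("Residual risks acceptable: a certificate could be granted.")
else:
    print("Residual risks not acceptable: the application must be redesigned.")
```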

The project that we analyze in this paper set out to address such issues through “democratization”, operationalized by facilitating the crowdsourcing of cobot software and applications by members of a makerspace in cooperation with an interdisciplinary research team. The empirical data were collected using an ethnographic approach (Amann & Hirschauer, 1997; LeCompte & Schensul, 1999; Van Maanen, 2011) based on participant observation and interviews with makerspace representatives and other members of the project team, consisting of robotics researchers and cobot safety experts, human-robot interaction (HRI) researchers, industrial robotics engineers and trainers, and makerspace employees (trainers and programmers). We observed two project phases, lasting for about two years, by participating in all project meetings as well as by observing project-related activities and conducting interviews in a makerspace and a factory training center. In the first phase of the project, we focused on the interests of the different partners, converging around the shared goal of “democratizing cobot technology in makerspaces.” Then, in the second phase, we observed how the safety issue was being dealt with by the different actors involved in the project and how the makerspace reconfigured its internal organizational structure to fulfill its duties in the project. During this time, the composition of the project team changed several times, especially on the side of the makerspace.

3. Trading Zones, Makerspaces and the Democratization of Technology

The theory behind our analysis in the context of makerspaces is informed by the concept of the trading zone. The use of this concept implies a strong focus on the exchange of knowledge, expertise, and practices between members of different technical cultures. According to Collins et al. (2007), depending on the configuration of power relations between the participants in the trade, there can be different types of trading zones, which can transition into one another. In the paragraphs below, we first provide insights from the literature into the maker culture and its relation to the idea of democratizing technology. After this, we connect those insights to the concept of the trading zone.

3.1 Makers and the democratization of technology

The association between the idea of technology democratization and the maker culture is well established. Bijker (1996) proposes the term “democratization of technology”, denoting a form of resistance against established institutions and regimes of knowledge production, in which experts play dominant roles in spite of their “interpretive flexibility” (Bijker et al., 1987). In Bijker’s view, participation and pluralism are determining features of democracy, which can also be transferred to technology in terms of development and use. Smith et al. (2020) note that, although the social construction of technology “is nothing new”, the recent discourses proclaiming a so-called “Fourth Industrial Revolution” driven by “cyber-physical production systems” tend to forget about the historical role of the social component in production systems. What Smith et al. (2020) refer to as “post-automation” seems to draw from Bijker’s “democratization of technology”:

“… post-automation is about the subversion of technologies that appear foundational to automation theory, and appropriating them for different social purposes, on less functionalist terms. Post-automation looks to a more open horizon based in democratic and sustainable relations with technology, and that thereby develops socially useful purposes in human-centred not human-excluding ways.” (p. 9)

Tanenbaum et al. (2013) suggest a clear link between the maker community and the democratization of technology, by arguing that “DIY practice is a form of nonviolent resistance: a collection of personal revolts against the hegemonic structures of mass production in the industrialized world” (p. 2609). As Tanenbaum et al. (2013) note, through this form of nonviolent, nonthreatening resistance, the members of the DIY community become themselves “co-designers” and “co-engineers” of technologies normally created by experts in laboratories, scientific institutions, and private organizations by first appropriating them and then contributing to their (further) development in a significant way.

Some authors regard the maker movement as an effect of a so-called “Third Industrial Revolution” (Rifkin, 2011; Troxler, 2013), which is characterized by the shift from a hierarchical towards a so-called lateral distribution of power in the manufacturing domain. This shift is facilitated by state-of-the-art digital communication infrastructures and means of energy production. Fablabs (fabrication laboratories), in which members have access to the newest (often additive or subtractive) manufacturing technologies (e.g., 3D printers, laser cutters, CNC mills, etc.), helped to blur the “labour-capital divide” and the “white-collar-blue-collar divide” by facilitating the re-emergence of the “owner-maker” and of the “designer-producer” (Troxler, 2013).

Recent work in peer production studies provides a more nuanced image of makerspaces and their communities. Not only are makerspaces a ‘new old’ kind of institution, having roots in the so-called Technology Networks, which emerged in Britain during the 1980s (Smith, 2014); but instead of embodying the idea of resistance against dominant institutions and regimes of technological innovation, some of them seem to be increasingly defined by ‘institutional encounters’ with the very organizations and regimes from which they wish to distinguish themselves (Braybrooke and Smith, 2018). In the 1980s, technology networks were “community-based workshops [which] shared machine tools, access to technical advice, and prototyping services, and were open for anyone to develop socially useful products” (Smith, 2014, p. 1). Some 20 years later, as Braybrooke & Smith (2018) note,

“[d]epending upon the specific institutional encounter, makerspaces are becoming cradles for entrepreneurship, innovators in education, nodes in open hardware networks, studios for digital artistry, ciphers for social change, prototyping shops for manufacturers, remanufacturing hubs in circular economies, twenty-first century libraries, emblematic anticipations of commons-based, peer-produced post-capitalism, workshops for hacking technology and its politics, laboratories for smart urbanism, galleries for hands-on explorations in material culture, and so on and so on … and not forgetting, of course, spaces for simply having fun.” (p. 10)

Whereas such interpretations provide an interesting look into the role of makerspaces as drivers of innovation, they also bring up the issue of how makerspaces can maintain their autonomy and preserve their role as a more democratic alternative to rule-based, normative institutional models when dealing with traditional organizations (Braybrooke & Smith, 2018). The question is therefore what the term “democratic” actually means in such encounters. The same authors conclude that “[t]he social value in makerspaces lies in their articulation of institutional tensions through practical activity, and in some cases, critical reflexivity” and that makerspaces should therefore not be devalued because they cannot “overturn institutional logics all by themselves” (p. 11).

Other strands of research speak of the diversity of motivations and programs driving the members of the maker community around the world. Lindtner (2015) notes that in China, actors in makerspaces appear to have a political agenda when they engage in doing things in a country-specific way, as an ‘antiprogram’ to the widespread Western manufacturing technology and culture pervading Chinese factories. By contrast, Davies (2017) argues that in the United States, makers do not seem to have political motives or to compete with traditional manufacturing sites. Instead of subscribing to ideals of democratizing technology and manufacturing techniques, US makers and hackers seem more concerned with being part of a community for purposes of leisure and socialization.

The growing encounters between members of traditional organizations, like companies or research institutes, and members of the maker community led Dickel et al. (2014) to regard shared machine shops as ‘real-world laboratories’:

“These spaces provide niches for experimental learning that expand the scope of established modes of research and development which are predominantly embedded in professional contexts of industry or science. As a specific property, [shared machine shops] have a capacity for inclusion because they provide infrastructures for novel forms of collaboration as well as self-selected participation of heterogeneous actors (in terms of expertise, disciplines, backgrounds etc.) who can join the related endeavours.”

As real-world laboratories, shared machine shops are places where people can join collaborative projects; they “might also be places of serendipity, where experts and professionals meet with hobby enthusiasts and DIY innovators and work together on new, unexpected projects” (Dickel et al. 2014, p. 7). Yet, while terms like “real-world laboratories” shed light on the roles shared machine shops play in society, they do not say much about how and why exactly collaborations between different kinds of (un)certified experts come about, unfold, flourish, or fail. By framing such collaborations using actor-network theory, Wolf and Troxler (2015) investigate how knowledge is co-created and shared in open design communities. Other authors have pointed to the existence of a logic of exploitation (Söderberg & Maxigas, 2014), which, for example, leads private companies to crowdsource software and hardware by drawing on the availability and expertise of so-called ‘independent developers’ (Drewlani & Seibt, 2018). Drewlani and Seibt (2018) note that the independent developer is a co-creation of the interested company and the co-interested developers (in an actor-oriented sense), who are enrolled in company-controlled development activities.

3.2 Makerspace encounters and the trading zone concept

Complementing this body of work, our focus in this paper is on a rather specific kind of encounter—between robotics and human-machine interaction researchers on one side, and members of the DIY community on the other. While peer production studies have looked at how additive and subtractive manufacturing technologies are being used, little attention has been given to the appropriation of assembly technologies, like industrial robots, which are essential in complex product manufacturing processes. Our perspective also departs from the dichotomous view of makerspaces and other institutions (companies, research institutes, government agencies, etc.) as entirely distinct entities, which only interface through well-defined protocols of collaboration. Instead, we argue that the makerspace, as a particular socio-technical configuration, facilitates the emergence of trading zones between the members of different types of (research) institutions. Makerspaces, including their members and “member-employees,” thus play a dual role: trading partners and facilitators of the trade, while having to fulfill the specific interests that accompany the respective roles.

Galison (1997) coined the term trading zone, representing “[a] site—partly symbolic and partly spatial—at which the local coordination between beliefs and action takes place” (p. 784). Such coordination is made possible through interlanguages (trading languages, pidgins, creoles), which facilitate the communication between different epistemic subcultures (e.g., theoretical and experimental physics) sharing a common goal (e.g., to build a radar system). Drawing on Galison’s work, Collins et al. (2007) note that there can be several types of trading zones, which do not necessarily build on interlanguages alone but also on what Collins & Evans (2002) call “interactional expertise”—expertise that, for example, is sufficient to interact with participants and carry out a sociological study; or boundary objects—“objects which are both plastic enough to adapt to local needs and constraints of the several parties employing them, yet robust enough to maintain a common identity across sites” (Star & Griesemer, 1989, p. 393). Schubert and Kolb (2020) provide an example of a trading zone between social scientists and information system designers, while emphasizing the importance of “symmetry” as “a mode of mutual engagement occurring in an interdisciplinary trading zone, where neither discipline is placed at the service of the other, and nor do disciplinary boundaries dissolve” (p. 1). The trading zone concept has also been applied to exchanges between non-scientific communities (Balducci & Mäntysalo eds., 2013; Gorman ed., 2010). In these examples, the different groups involved in the trade seem to have sufficient epistemic, political or other kinds of authority to act as approximately equal partners in the trade. The balance of power relations between trading partners determines whether a trading zone tends to be collaborative, coercive, or subversive (Galison, 2010; Collins et al., 2007).

Collins et al. (2007) define trading zones “as locations in which communities with a deep problem of communication manage to communicate” (p. 658), while stressing that “if there is no problem of communication there is simply ‘trade’, not a ‘trading zone’” (p. 658). One type of trading zone identified by Collins et al. (2007), which does not rely on interlanguages, is that of “boundary object trading zones, which are mediated by material culture largely in the absence of linguistic interchange” (p. 660). Drawing on Collins et al. (2007), we would argue that in the studied project, the cobot performs like a multi-dimensional boundary object, both materially and conceptually. This performance stimulates and justifies exchanges between researchers and members of the makerspace community—as two different “user-developer” groups—and between different kinds of institutions. As Galison (2010) notes, one way to determine whether a particular sociotechnical configuration may be conceived of as a trading zone is to look at what is being traded, by whom, and how power is distributed among the partners of the trade. To provide situated answers to these questions, we turn to our account of the empirical material, where the situated interactions between the different actors involved in the project facilitated the emergence of a trading zone through the shared goal of democratizing cobot technology.

4. Trading Robotics Knowledge and Practices

Drawing on the theoretical framework above, in the following we investigate the makerspace through the lens of the trading zone, focusing on three different observations. First, we analyze in general how collaborations between researchers and members of the DIY community are facilitated in makerspaces. After that, we look at a specific deliberation on cobot safety between the members of the project team, whereby we discuss how this notion was construed and negotiated by the different individual and institutional actors involved in the studied project. Finally, we look at the way in which knowledge about cobot safety and applications is produced in a context previously unforeseen by the creators of the technology.

4.1 Actors’ Hopes and Expectations for the Project

During the project’s preparatory phase, the representatives of the different institutions involved in the project—a robotics research institute, a human-machine interaction group from a university, a factory training center, and a makerspace—emphasized the need for democratizing cobot technology. For the HRI researchers, this implied that the cobot could be made more accessible to a wider range of potential users. These researchers believed that the technical and economic potential of cobots was curtailed by the strict safety norms and standards governing the industrial uses of the technology. In addition to the safety issue, some HRI researchers considered that strictly controlled factory and laboratory environments impose limits on the creativity of application developers. The researchers from the robotics institute, on the other hand, were interested in meeting the “maker scene” to gain access to a pool of potential contributors to open-source robotic software. In this sense, these researchers expressed their hopes concerning the organization of “hackathons” and other development competitions. The roboticists were also interested in exploring new ways of ensuring human-cobot interaction safety, which would allow more flexibility and creativity than current industrial safety norms. The representatives of the makerspace, being the providers of such a maker scene, stressed that “this kind of project” was exactly what they were looking for in order to develop their expertise in the domain of robotics. Besides providing its paying members with access to a wide range of industrial tools and machines, the makerspace is part of a holding company, which turned a former factory into a business hub hosting co-working spaces, company offices, and the makerspace itself. With more than 500 members, many of whom are artists, students, and diverse professionals, one of the makerspace’s roles in this configuration is to draw talent and expertise into its ecosystem. Finally, the representatives of the factory training center expressed their wish for the project to loosen cobot safety certification requirements and reduce costs. They also stressed the industry’s need to increase the pool of skilled cobot programmers and operators. In this sense, the makerspace could serve as an additional robotics training site located in a city. The training center organizes a high number of courses with a diverse pool of participants, ranging from teenagers to unemployed persons seeking to engage in reskilling activities.

The way the different partners engage with the topic of democratization of technology is best understood in a very practical manner. The concept of democratization is, as such, a buzzword, much like “ethical robots”, “responsible innovation”, and “human-centred technology”. The role of democratization in this project has not been very clear. First and foremost, the notion of democratizing technology did not in itself prompt many explicit discussions of what it means to democratize cobot technology. Nevertheless, even a buzzword deployed in a rather superficial manner can have an effect, in the sense that it can help to establish the trading zone that allows the different stakeholders of the project to encounter one another in a much more profound manner (Bensaude Vincent, 2014; Ionescu, 2013). The implicit ideas about the concept of democratization invited and facilitated the encounters between the different actors in the project. Within the context of those encounters, the different actors in the project were able to jointly transform their knowledge and practices towards enabling a different mode of conducting human-robot interaction research in an unconventional cobot usage context. Furthermore, this newly established infrastructure of inquiry also provided room for qualitative methods, such as participant observation, which contrast with the measurement-based assessment methods used in industrial safety certification practices.

4.2 Negotiating Safety at the Boundary

For the HRI researchers on the project team, the implicit idea behind democratization was to design the architecture of the cobot system for the makerspace. This included defining the system’s safety features. The goal was for makerspace members to interact with the cobot safely and directly, without application-specific safety certifications or supervision by trainers. With these goals in mind, the project team pushed to find a suitable technical solution that would work both for the factory’s training center and the makerspace. Several face-to-face and online meetings were organized. In the following, we discuss some of the recurring themes observed during these meetings. The reporting technique used is that of short critical event vignettes based on field notes, which “depict scenes that were turning points in the researcher’s understanding or that changed the direction of events in the field site” (LeCompte and Schensul, 1999, p. 273).


\"\"<\/a>
\n

Figure 1: The axes (or joints) of a typical 6-degree of freedom robot.<\/strong><\/figcaption><\/figure>\n

Vignette 1 – Trading flexibility for safety. In one pivotal meeting, the participants discussed the problems faced by the factory when certifying new cobot applications. Certifications consist of scenario-based risk and hazard analyses conducted by external consultants. The meeting’s focus was on the safety certification of cobot training as an application. As one trainer explained, a surprising result of the certification process was that the range of the end effector of the 6-axis robotic arm used was significantly limited, so that it had to remain close to the horizontal plane of the table, in order to minimize the risk of head injuries:

“The robot arm is not allowed to reach higher than a few centimeters above the table. [She indicates 10-15 cm with her hand.] This was not thoroughly thought through by the consulting company. One teaches a move at 140 mm and does not see anything.”

To enforce this constraint, the consulting company limited the range of the robot’s “elbow,” i.e., the third axis from the base (see Figure 1). This solution puzzled the safety experts from the robotics research institute, who did not understand the purpose of this measure, since “the third axis could be anywhere” and might therefore still pose safety risks. Moreover, as one trainer pointed out, in productive applications the third-axis limitation did not exist because it would make the robot practically useless. As a result, whereas trainees would learn how to use the robot in a reduced functional mode, on the shop floor they would be exposed to another, less restrictive one. The trainers considered this discrepancy unacceptable.
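The safety experts' objection can be illustrated with a deliberately simplified model. In the sketch below, a two-link arm moving in a vertical plane stands in for the shoulder and elbow of a 6-axis cobot; the link lengths, the certified elbow range, and the table-height limit are all invented for illustration. Even with the elbow (the "third axis") confined to the certified range, the tool can still rise well above the table plane when the shoulder moves, which is roughly why restricting that one joint struck the experts as an odd way to bound the end effector's height.

```python
import math

# Simplified vertical-plane model of a cobot's shoulder and elbow.
# Link lengths, joint ranges, and the height limit are illustrative only.
UPPER_ARM_M = 0.40               # shoulder-to-elbow link length
FOREARM_M = 0.35                 # elbow-to-tool link length
HEIGHT_LIMIT_M = 0.15            # "a few centimeters above the table"
ELBOW_RANGE = (-2.6, -1.8)       # certified range of the "third axis" (radians)

def tool_height(shoulder: float, elbow: float) -> float:
    """Height of the tool above the base for given shoulder and elbow angles."""
    return UPPER_ARM_M * math.sin(shoulder) + FOREARM_M * math.sin(shoulder + elbow)

# Sweep the shoulder over its range while keeping the elbow inside the certified
# range, and record the highest point the sampled configurations still reach.
elbow_samples = [ELBOW_RANGE[0] + i * (ELBOW_RANGE[1] - ELBOW_RANGE[0]) / 10 for i in range(11)]
worst_height = max(
    tool_height(math.radians(s_deg), e)
    for s_deg in range(0, 181, 5)
    for e in elbow_samples
)

print(f"Highest sampled tool position despite the elbow limit: {worst_height:.2f} m")
print(f"Allowed height above the table plane:                  {HEIGHT_LIMIT_M:.2f} m")
if worst_height > HEIGHT_LIMIT_M:
    print("Limiting the elbow alone does not guarantee the height constraint.")
```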

This vignette suggests that cobot safety is contingent on the context of use, which entails different trade-offs between safety and flexibility. In the context-dependent application of the norms by certification consultants, common-sense aspects seem to carry less weight in the process than measurements and hazard scenarios—something the trainers considered problematic. “Safety” thus seems to be flexibly interpretable by consultants and factory representatives. This insight is important because ensuring safety was one of the main goals of the project proposal. If the very notion of safety was flexibly interpretable, then safety as a goal could not easily be achieved. Through this kind of encounter it became clear how different interpretations of the cobot as well as its potential use were contingent on how safety was going to be negotiated between participants throughout the project.


Vignette 2 – Deferring to training and experience. In a follow-up meeting, the researchers from the robotics institute laid out their plan for a so-called “cobot safety concept” for makerspaces and training centers. In these settings, additional cobot safety features could be incorporated or detached depending on the experience level of the respective user. There would be two working modes—a safe one and a less restrictive, creative one, the latter of which could be used during training sessions. As one safety expert argued, deferring safety to training and experience was motivated by the principle that, “whenever one works with a hazardous technology, one necessarily needs training.” This philosophy was in line with that of the makerspace representative participating in the meeting, who proposed that the safety concept should be accompanied by a “training concept” with several attainable levels:

“When a person reaches a certain level, they get an OK from the experts. We need to think about a good training concept. […] After completing all training modules—we’ve done everything we can! Now you have to take care yourself.”

As the discussion continued, a more nuanced image of the safety and training concepts emerged. The training modules and levels would be tied to the safety features of the robot. After receiving a basic safety training, makerspace members would be allowed to use the cobot in a power- and force-limited mode. An additional training module would qualify users to teach the robot and to have it pick and place 3D-printed parts, which do not have sharp edges. A further module would grant the privilege of working with the robot at higher speeds and forces. On another note, one of the robotics experts explained that safety certifications distinguish between qualification and competence. To attest competence, the number of usage hours, completed projects or simulations, and other kinds of practical experience with cobots may be taken into consideration. “Whether or not these experiences are successful is not important,” the expert concluded.
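A rough way to picture how such a module-based training concept could be tied to the robot's safety features is to map each attained training level to a set of operating limits that the control software enforces. The sketch below is our own illustration of that coupling; the level names, speed and force caps, and permitted operations are invented placeholders rather than values discussed in the project.

```python
from dataclasses import dataclass

# Illustrative coupling between training levels and enforced operating limits.
# All numbers and level names are placeholders invented for this sketch.

@dataclass(frozen=True)
class OperatingLimits:
    max_tcp_speed_m_s: float      # cap on tool speed
    max_force_n: float            # cap on contact force
    may_handle_sharp_tools: bool  # whether end effectors with sharp edges are allowed

TRAINING_LEVELS = {
    "basic_safety":   OperatingLimits(0.10, 50.0, False),   # power- and force-limited mode
    "teach_and_pick": OperatingLimits(0.25, 80.0, False),   # teaching, pick-and-place of printed parts
    "advanced_user":  OperatingLimits(0.50, 120.0, True),   # higher speeds and forces
}

def clamp_request(level: str, requested_speed: float, requested_force: float) -> OperatingLimits:
    """Clamp a member's requested speed and force to what their training level allows."""
    limits = TRAINING_LEVELS[level]
    return OperatingLimits(
        max_tcp_speed_m_s=min(requested_speed, limits.max_tcp_speed_m_s),
        max_force_n=min(requested_force, limits.max_force_n),
        may_handle_sharp_tools=limits.may_handle_sharp_tools,
    )

# Example: a member with only the basic module asks for a fast, forceful motion.
print(clamp_request("basic_safety", requested_speed=0.6, requested_force=150.0))
```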

This vignette shows how the notion of safety was developed through a new safety concept. It suggests that, in contexts where application-based certifications are not feasible, experts relate their perception of safety to user experience. Most project team members considered the training and experience of users mandatory, even in makerspaces, since, in the case of an accident, authorities would scrutinize the ways in which these trainings had been organized and how the different levels of safety had been ensured in relation to the experience of users. As a result, the safest strategy for implementing cobot safety in the makerspace turned out to be similar to the one implemented by the factory’s training center. During these meetings, the safety and training concepts appeared to take shape iteratively and collaboratively, in a path-dependent way. The role of “the experts” in defining the safety features of the cobots and the different levels and permissions of the training programs was uncontested, which was an important part of the way in which encounters took place at that stage of the project.

Vignette 3 – The responsible user. While the project team worked on defining a new safety and training concept, one member of the team found out that another makerspace had already permanently installed a cobot, which could be used by members at will. The safety concept used in that makerspace was based on common sense, or on “a sane human mind,” as one of its representatives explained. Members were only offered a basic safety training, which included indications about which objects the robot was not allowed to manipulate (e.g., knives and other sharp tools). The users were then considered responsible for their own safety as per the general terms of agreement stipulated in their membership contracts. The peer pressure generated by this unexpected turn of events produced a certain shift in power relations between the robotics safety experts and the HRI researchers on the team, the latter of whom were less concerned with the safety issue and more eager to install the cobot in the makerspace as soon as possible. The competing makerspace had brought about—although indirectly—a new image of cobot users as responsible and accountable individuals, capable of taking care of their own safety. Accountability was ensured by having all members sign a liability waiver—a common practice among makerspaces.

This episode illustrates how the responsible user model, as an alternative concept, challenged the need for an elaborate safety and training concept in the project. In response, the safety experts on the team provided a new set of options concerning the system architecture, which now included a “gradual” safety model. Also motivated by budgetary concerns, one variant of this model foresaw the acquisition of a basic cobot without any additional safety features, provided that it was operated very slowly, with hardcoded speed limits ensuring safety. As more experience would be gained by observing how members interacted with the cobot, the safety experts hoped that a better-informed decision could be made concerning the additional safety systems required. However, the makerspace was to “seriously consider how much residual risk they were willing to accept,” the roboticists insisted.

The sequence of the three vignettes illustrates how the members of different technical cultures jointly sketched out a new cobot safety concept for makerspaces—a new context of use for this technology. In this process, the roboticists and industrial engineers set out from a common understanding based on the relevant industrial safety norms, with which they were familiar. Yet, as the discussion progressed, it became clear that the application of these norms in non-productive settings, such as the training center, led to impractical solutions with which all actors were unsatisfied. Simultaneously, the image of the projected user (Akrich, 1992) started to shift across the three vignettes in fundamental ways.[1] While in factories safety norms configure human-robot interactions, some makerspaces trust their members to be responsible and accountable individuals who can work with “hazardous technologies” without extensive training. By configuring the responsibility of the projected user on a spectrum from total dependency on norms (i.e., factory workers for whose safety others are responsible) to autonomy (i.e., makers and hackers responsible for their own safety and accountable for any consequences), the actors seemed to have found fruitful ground for negotiation, which allowed them to further transform their knowledge. The trainable, configurable, and responsible user played an essential role in articulating the link between the safety concept and the training concept. The flexible, module-based training concept proposed by the roboticists helped to relate the training methods and practices used in the factory’s training center with those used in the makerspace.

The empirical material also suggests that negotiating safety at the boundary between different institutions and technical cultures represents one of the practical means of democratic deliberation used during the project. Facilitated by the cobot as an articulated boundary object—comprising a safety concept, a training concept, and the flexible interpretation of safety—the members of the project team managed to transform their knowledge in ways that would not have been possible had they stayed within the frame of mind of their own institutions and technical cultures. This particular transition became obvious when the responsible user model was blended with the safety measure of simply slowing down the robot to produce a viable preliminary cobot system that was “safe enough” for work in the project and the makerspace to continue.

4.3 Let’s Agree to Disagree

In addition to participant observation, we conducted interviews with the trainers from the factory training center and those from the makerspace. Concerning the safety issue, the questions clustered around the topics of how training courses are organized, how safety is being addressed in the training sessions, and what potential hazards the trainers see in the trainees’ interactions with cobots. The following interview extracts illustrate several of the differences in the language and practices used by researchers and industrial engineers on the one side, and cobot trainers from the makerspace on the other. For example, to the question of what could potentially go wrong during training, one trainer from the factory training center responded:

“So, there can be unforeseen movements; therefore, each step [of the robot] towards the next waypoint is being tested before the entire program runs, because there are some [robot] motions where—and this happened a few times during workshops—the robot chooses a completely different way than one would expect.”

The problem of unforeseen motion paths is pointed out by the trainer as one potential safety issue occurring during workshops. Unforeseen pathways between predefined waypoints arise from the so-called inverse kinematics algorithms, which compute the six robot joint angles for any given target pose; the robot then rotates its joints until that target pose is reached, so that the path traced by the tool between two waypoints is interpolated in joint space and need not resemble the straight line an observer might expect (a minimal numerical illustration of this effect follows the exchange below). This issue is treated differently by the makerspace trainers. The following discussion illustrates this contrast:

Interviewer: So with MoveJ [joint-based movement, as explained before] or which kind of movement?

Trainer: Definitely not linear. I think it must have been MoveJ. For sure.

Interviewer: And this happens also when you are running the simulation on the teach pendant first, or…

Trainer: This, funnily, we don’t do.

Interviewer: Ok.

Trainer: We do not watch the simulation [of the movement].

Interviewer: … pause … Me neither!

Trainer: Ok. [Laughing together]
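As a minimal illustration of why a MoveJ motion can "choose a completely different way" than expected, the following sketch compares joint-space interpolation with straight-line interpolation on a toy two-link planar arm. The link lengths and waypoints are arbitrary, and the model deliberately ignores the real cobot's six axes, so it only demonstrates the general effect rather than the behavior of any particular robot.

```python
import math

# Toy 2-link planar arm: enough to show that interpolating joint angles
# ("MoveJ"-style) yields a tool path that differs from the straight line
# a linear ("MoveL"-style) motion would follow between the same waypoints.
L1, L2 = 0.4, 0.3  # link lengths in meters (arbitrary)

def forward(q1: float, q2: float):
    """Tool position for the given joint angles."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def inverse(x: float, y: float):
    """One closed-form inverse kinematics solution (elbow-up branch)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    c2 = max(-1.0, min(1.0, c2))  # clamp numerical noise
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    return q1, q2

start, goal = (0.55, 0.10), (0.10, 0.55)       # two taught waypoints
q_start, q_goal = inverse(*start), inverse(*goal)

max_deviation = 0.0
for i in range(101):
    t = i / 100
    # "MoveJ": interpolate each joint angle independently.
    q1 = q_start[0] + t * (q_goal[0] - q_start[0])
    q2 = q_start[1] + t * (q_goal[1] - q_start[1])
    xj, yj = forward(q1, q2)
    # "MoveL": interpolate the tool position along the straight line.
    xl = start[0] + t * (goal[0] - start[0])
    yl = start[1] + t * (goal[1] - start[1])
    max_deviation = max(max_deviation, math.hypot(xj - xl, yj - yl))

print(f"Maximum deviation of the joint-interpolated path "
      f"from the straight line: {max_deviation:.3f} m")
```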

One way to avoid unexpected robot pathways is to first simulate them using the robot’s software. Yet, in the makerspace, using the simulation to preview the robot’s movements during trainings does not appear to be common practice. Although intrigued by this answer, we tried to avoid being normative by admitting that simulation is not always necessary. The next excerpt illustrates how unforeseen movements are perceived by trainer and trainees, and how the trainer goes about explaining what happens behind the scenes:

Interviewer: Is this [the unexpected movement] something that disturbs the workshops or is it funny when it happens? What is the effect?

Trainer: The effect is mostly “oh, I did not reckon with that.” [The trainees] are surprised but they are not scared. […]

Interviewer: And if they ask “what just happened?”—how do you explain that?

Trainer: We try to find out together, to remove [the problem]. I think I never spoke in a workshop about inverse kinematics. I try to avoid that because, to be honest, I am not knowledgeable enough myself to properly explain that. What I do explain is that, during the different movements—that is, MoveJ—the robot uses the axes in such a way that is most efficient for itself. And in the case of a linear movement, it goes from point to point in a line, which is not the case with MoveJ. So this is my explanation for the two movement types.

In this excerpt, the makerspace trainer seems to argue in favor of providing non-expert explanations for unexpected robot movements, while invoking her lack of knowledge concerning inverse kinematics. In these situations, the trainer seeks ways to legitimize simplified explanations over robotics terminology in an attempt to distill the necessary practical knowledge from a theory that is inaccessible to non-experts. Nevertheless, the trainer adopts the technical term ‘MoveJ’, which we had dropped earlier in the discussion, thus showing how the language exchange takes place. To the question of whether one would gain something by talking about inverse kinematics during the workshops, the makerspace trainer responded:

“I don’t think so. I think, to be honest, that inverse kinematics is only relevant when one is really interested in robotics, that is, when one wants to go deeper. But for programming, if one is taught how—and this pertains to intuition—to learn something not through reading but through ‘doing’ and to understand, and for that there are possibilities in the makerspace; and everyone who works with [the robot] knows that every step that I program must be tried out, and not with 100% speed. And I think that this way, one gets a tremendous feeling about what is possible and what is not.”

The trainer stresses that speaking about this problem using robotics terminology is neither necessary nor desirable. Then, she goes on to sketch the profile of a projected cobot user who is likely to be encountered in the makerspace. The responsible cobot user is thus defined as a pragmatic individual who is well aware of the safety hazards entailed by working with the cobot while nonetheless being more interested in programming the cobot than in learning about its internals. This type of user is expected to benefit from the resources of the makerspace (interested peers, more experienced trainers, other workshops, etc.) to learn about safety and other issues. The makerspace is thereby stipulated as a source of practical knowledge and possibilities—one only needs to ask and come up with ideas, whereby the role of reading is superseded by that of “doing.” Together with the image of the pragmatic, responsible cobot user, downplaying the importance of robotics theory and terminology may be regarded as an act of resistance to well-established forms of knowledge, learning, and acting that are characteristic of traditional institutions; a resistance articulated around knowledge gained through and (re-)invested in practice rather than theory; and a form of resistance through doing that keeps the power relations between makers and researchers balanced and upholds the principle of symmetry. In the next section, however, we show that the makerspace in question itself became reconfigured over the course of the project.

5. Reconfiguration

In the second part of our analysis, we trace how the studied makerspace reconfigured under the influence of other institutional models. As the makerspace facilitates a trading zone in which it becomes accessible to research institutes and companies, its activities are increasingly projectified and a process of professionalization is pursued. We analyze this by going through three different roles of the makerspace, namely: the makerspace as the provider of the infrastructures behind this trade, the makerspace as a party with specific interests in the project leading to particular expectations concerning the project’s outcome, and finally the makerspace as a trading partner in exchanges that are supposed to help in the co-construction of cobot technology.

5.1 Infrastructures and the trading of epistemic goods

Within the sociotechnical configuration of the studied project, the notion of democratization of technology allowed for pivotal agreements on an institutional level, between the different types of organizations involved in the project. By offering the infrastructure and a pool of participants, the makerspace received expertise in the domain of robotics and HRI from two research institutes and an industrial training center. The training center exchanged their safety requirements and expertise gained through expensive certification processes for a new safety concept and the promise of access to a new training infrastructure. The makerspace thus positioned itself as a facilitator of the trade between different kinds of institutions, while taking part in the trade itself as well. Within this context of facilitation, the main form of exchange occurred at the conceptual level between the researchers and engineers involved in the project. United by the common goal of democratizing cobot technology for different purposes, they transformed their knowledge about this kind of industrial robot. When used in an industrial context, the cobot seemed to convey the image of a pre-programmed machine, which configures worker routines. By contrast, in the makerspace it was perceived more like a computer that moves, which allows its users to explore its capabilities in a playful manner. The news about another makerspace having already permanently installed a cobot affected the project team’s preliminary safety concept, which was built upon an elaborate training program combined with very specific active safety systems. These “epistemic moments” prompted an inversion in the perception of the user-cobot relation, with users becoming interacting subjects rather than objects of inquiry for researchers and engineers.

It is through these moments that a form of epistemic trade appears more evidently to be at work. The goods of this trade are epistemic in nature because they challenge the entrenched beliefs, practices, and knowledge of the participants in unexpected ways. The “epistemic goods” being traded between the members of different technical cultures seem to be of little value within one’s own culture—perhaps because they are considered less important than other objects and thus play secondary roles in knowledge production processes. In this regard, the present case study suggests that, to foster sustainable exchanges between researchers and makerspace members as well as between members possessing different kinds and levels of expertise, trade must be fair in the sense that the contributions of all the parties involved should be balanced and equitable. Mutual respect for each other’s expertise is required of both researchers and participants. Through mutual respect, hierarchical boundaries induced by the members’ diverse educational backgrounds can be blurred. And the benefits of the trade must be shared one way or another with the members of the other culture, to whom the receiving parties remain indebted.

5.2 Interests and the alignment of expectations

Whereas the makerspace can be seen as an important facilitator of the trading zone, as argued above, balancing power and negotiating different interests among the actors involved in the trade was a fragile endeavor. Furthermore, a difficult financial situation urged the makerspace to seek more aggressive ways to capitalize on the outcomes of the project. Crucial here is the fact that the makerspace in question is also an institution which, shortly after its inauguration, provided creative refuge primarily to the members of the local community of artists and to members of the public. Roughly three years later, however, it is seeking to professionalize its staff and organizational culture. This is perhaps also an effect of the Covid crisis, which made high-tech makerspaces even more dependent on public funding than before. At the same time, some of the early members of the makerspace saw an opportunity to capitalize on their own creativity. Especially when public funding is used, there is hope for another mode of working, in which people get paid for their creative work without having to protect and commercialize it using instruments like patents and startups. This reconfiguration led to misalignments of expectations, thereby delegitimizing gig-based trade between the makerspace and its members. The following episode provides an example of such a misalignment, reflecting the makerspace’s reconfigured attitude towards its member-employees.

The makerspace contracted one of its members to find novel potential collaborative robot applications. The member’s freedom was thus constrained to some extent by the requirement to produce a result in line with the goals of the project. At the same time, there were no restrictions as to how, when, and where such explorations should happen. After some time, the assessment of this “uncertified expert” was that, with the exception of a stop-motion application (i.e., using the robot to film or photograph plants and other (living) things over long periods), cobots could only do what other specialized machines, like 3D printers or circuit board assembly machines, already did better. Instead of a collection of applications, the programmer presented a new robot software, which he created outside the allotted contingent of hours for which he was being paid. The idea, so he told us, came during a long train ride. He integrated an inverse kinematics library for the UR5 cobot into an existing 3D simulation and programming environment, in which different kinds of curves could be drawn by a user and followed by the robot, both in simulation and in reality. Whereas the idea was not entirely new, the way in which it was implemented was highly interesting and innovative. Some of the researchers involved in the project regarded this outcome as a fulfillment of the hopes and expectations with respect to open innovation in makerspaces. The programmer—one of the makerspace’s “hackers”—proved that, within a few weeks, a single creative and skilled individual can achieve results that other institutions would pursue with much greater investments and bureaucratic overhead. Yet, to the disenchantment of the entire project team, the hacker was not willing to publish the source code without what he considered a fair remuneration for his efforts. This was frustrating for some members of the project team, since the exchange of ideas that preceded the development of this software had also contributed to its design.
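To give a rough sense of what such a curve-following tool involves, the sketch below samples a drawn curve, converts each sample into a Cartesian target pose with a fixed tool orientation, and hands the resulting waypoint list to the robot layer. It is not the hacker's software: the example curve, the pose format, and the send_waypoints stub are our own illustrative stand-ins for whatever the simulation environment and the UR5's interface actually provided.

```python
import math

# Illustrative pipeline: sample a drawn curve -> Cartesian waypoints -> robot layer.
# Everything here (curve, pose format, send_waypoints) is a stand-in for the
# actual simulation environment and robot interface used in the project.

def sample_curve(n_points: int):
    """Sample an example curve (here: one turn of a spiral) in the XY plane."""
    points = []
    for i in range(n_points):
        t = i / (n_points - 1)
        radius = 0.10 + 0.15 * t                  # meters, illustrative
        angle = 2 * math.pi * t
        points.append((radius * math.cos(angle), radius * math.sin(angle)))
    return points

def to_waypoints(xy_points, z=0.05):
    """Turn 2D curve samples into 6D poses (x, y, z, rx, ry, rz), tool pointing down."""
    tool_down = (math.pi, 0.0, 0.0)               # fixed orientation, illustrative
    return [(x, y, z, *tool_down) for x, y in xy_points]

def send_waypoints(waypoints):
    """Stub for the robot layer; a real tool would solve IK and command the arm."""
    for i, pose in enumerate(waypoints):
        print(f"waypoint {i:02d}: x={pose[0]:.3f} y={pose[1]:.3f} z={pose[2]:.3f}")

send_waypoints(to_waypoints(sample_curve(20)))
```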

5.3 Exchanges and the co-construction of technology

The abovementioned reconfiguration of the makerspace also signals another shift: the makerspace appears to have transitioned from one trading mode to another. In that regard, the trading zone model complements that of makerspaces as “real-world laboratories” (Dickel et al., 2014) by suggesting that the collaboration between techno-scientists and lay or “uncertified experts” (Collins & Evans, 2002), and between research institutes and makerspaces, is inherent to the co-construction of technologies and their users (Oudshoorn & Pinch, 2003) as a process of transition towards a potentially sustainable mode of knowledge production rather than a controlled experiment. The “problem of communication,” which Collins et al. (2007) require to justify the use of the trading zone concept, lies in the distinct languages used by the researchers and the makerspace members. This problem was overcome when a trading language emerged, which facilitated the communication between makerspace members and researchers while allowing the members of each culture to pursue their own interests. The resulting sociotechnical configuration of the project resembled a collaborative, heterogeneous trading zone, which produced a new image of the responsible cobot user as well as a novel robotics software environment that fit the spirit and the various interests of makerspace members. As discussed in the previous section, the makerspace first contracted an existing member, who was skilled in programming, to experiment with the cobot and thus find novel potential collaborative robot applications. However, after a period of evaluation and negotiation, the makerspace management decided not to acquire the rights to the newly developed tool, to terminate the contract with this programmer, and to hire a trained roboticist to work on the project.

With the hiring of a professional robotics engineer to conduct trainings and foster the creation of new cobot applications by regular members, the trading language also disappeared, as the researchers suddenly found themselves on the same page as the makerspace representatives. That is to say, the makerspace seized the opportunity to absorb some of the expertise of the researchers from the robotics institute, thereby creating the preconditions for transitioning towards a homogeneous trading zone. In exchange, the robotics institute gained access to a new sociotechnical infrastructure, with the help of which entrenched industrial safety norms and standards could eventually be rendered more permissive or fulfilled in other ways. This suggests that introducing a new technology in makerspaces may cause some degree of institutional isomorphism (DiMaggio & Powell, 1983) through professionalization and normativity. Whereas in the beginning the project attracted the members of very different technical cultures, thus facilitating the emergence of a heterogeneous, fractionated trading zone around the cobot as a boundary object, almost two years later a shift toward what Collins et al. (2007) call a \u201chomogeneous\u201d trading zone could be observed, in which the trading partners shared a robotics interlanguage that included notions specific to the use of robots in makerspaces: \u2018member applications\u2019 in addition to productive applications, \u2018responsible users\u2019 and \u2018safety concept\u2019 in addition to certified human-robot collaboration \/ coexistence \/ interaction, \u2018flexibilities\u2019 instead of restrictions, and so on. With professionalization and the introduction of new hierarchical levels and norms of conduct, the makerspace is therefore starting to position itself in Austria as a professional institution that can quickly adapt to new research topics and formats.<\/p>\n

6. Conclusion<\/h2>\n

Through our analysis of a project focused on the democratization of an industrial technology, we have traced how collaborative robots have crossed the boundary of the factory into the open world. While anticipated in European policy-making, such crossings play out in unexpected ways. One of the observable effects of the installation of cobots in the studied makerspace was that these robots accelerated the transformation of the makerspace into a technological platform for companies. This reconfiguration was rendered possible through professionalization and business orientation. It underlines the speculative dimension of robotics and automation technologies, which bear many future unknowns. Moreover, and unsurprisingly, there are also many fears and expectations around the future of automation. In this context, it is crucial that notions such as post-automation, which stress the need for and chart the potential of more democratic futures of automation, are taken seriously. It is equally important for the promoters of democratic alternatives to the Industry 4.0 vision to gain access to infrastructures that allow them to engage with novel technologies and the sociotechnical futures associated with them. The peer production community in general, and the makerspace model in particular, bear great potential when it comes to the (further) development of such infrastructures. On the other hand, however, as the paper has argued, such infrastructures reconfigure themselves in unexpected ways. Whether such reconfigurations have indeed led to some form of democratization remains an open question. What has already been achieved, however, is an active use of and engagement with such concepts, albeit a mostly implicit one.<\/p>\n

When it comes to the analysis of the encounters facilitated in the makerspace, our observations suggest that the exchanges and interactions between the members of different technical cultures produced new insights into cobot safety and human-robot interaction. Makerspaces can play multiple institutional roles at once, for example, by being open to the wider public while hosting technologies and members with a level of expertise comparable to that of \u2018certified\u2019 institutions and experts. The relationships between makerspaces and other institutions thus appear to be interwoven to such an extent that a clear distinction between them is no longer possible, beyond the observation that makerspaces are, in principle, open to interested members of the public, while other institutions generally are not. When they cross the boundary between industrial research settings and makerspaces, relatively stable models are transformed more profoundly than through exchanges among researchers and engineers alone. In this context, the trading zone model provided an analytical tool for tracing encounters, exchanges, and transformations of and between the members of different technical cultures. This model complements that of real-life laboratories by emphasizing interaction modes and processes rather than the actors\u2019 intentions.<\/p>\n

Acknowledgements<\/h2>\n

We would like to thank the two anonymous reviewers for their insightful comments and feedback. A special thanks goes to the editors of this special issue, Mathieu O\u2019Neil and Panayotis Antoniadis, for their prodigious editorial efforts and support. This research has received funding from the Austrian Research Promotion Agency through the \u201cCobot Meets Makerspace\u201d (CoMeMak) project number 871459.<\/p>\n

End notes<\/h2>\n

[1]<\/a> Akrich coined the term \u201cprojected users\u201d (as opposed to the actual users of a technology) to refer to the user images (or profiles) for which inventors and designers conceive technologies. <\/p>\n

References<\/h2>\n

Akrich, M., 1992. The de-scription of technical objects. In Wiebe E. Bijker and John Law (eds.). Shaping technology\/building society. MIT Press, Cambridge, MA, 205-240. <\/p>\n

Amann, K. and Hirschauer, S., 1997. Die Befremdung der eigenen Kultur. Ein Programm. In: Die Befremdung der eigenen Kultur. Zur ethnographischen Herausforderung soziologischer Empirie. Suhrkamp, Frankfurt am Main. <\/p>\n

Balducci, A. and M\u00e4ntysalo, R. (eds.), 2013. Urban planning as a trading zone. Springer, Dordrecht, NL. <\/p>\n

Bensaude Vincent, B., 2014. ‘The Politics of Buzzwords at the Interface of Technoscience, Market and Society: The Case of \u2018Public Engagement in Science\u2019’, Public Understanding of Science, 23(3), pp.238\u201353. <\/p>\n

Bijker, W.E., 1996. Democratization of technology: Who are the experts? Retrieved November 3, 2009. Available at: http:\/\/www.angelfire.com\/la\/esst\/bijker.html <\/p>\n

Bijker, W.E., Hughes, T.P. and Pinch, T.J. eds., 1987. The social construction of technological systems: New directions in the sociology and history of technology. MIT press. <\/p>\n

Braybrooke, K. and Smith, A., 2018. 'Liberatory Technologies for Whom? Exploring a New Generation of Makerspaces Defined by Institutional Encounters'. Journal of Peer Production 12, 3 (2018). <\/p>\n

Colgate, J.E., Peshkin, M.A. and Wannasuphoprasit, W., 1996. Cobots: Robots for collaboration with human operators. Proceedings of the ASME Dynamic Systems and Control Division, DSC-Vol. 58 (1996), 433\u2013440. <\/p>\n

Collins, H., Evans, R. and Gorman, M., 2007. Trading zones and interactional expertise. Studies in History and Philosophy of Science Part A, 38(4), pp.657-666. <\/p>\n

Collins, H.M. and Evans, R., 2002. The third wave of science studies: Studies of expertise and experience. Social studies of science, 32(2), pp.235-296. <\/p>\n

Corbin, J. and Strauss, A., 2014. Basics of qualitative research: Techniques and procedures for developing grounded theory (3rd ed). SAGE Publications, Los Angeles. <\/p>\n

Davies, S., 2017. ‘Characterizing Hacking: Mundane Engagement in US Hacker and Makerspaces’. Science, Technology, & Human Values, 43(2), 171-197. <\/p>\n

Dickel, S., Ferdinand, J.P. and Petschow, U., 2014. Shared machine shops as real-life laboratories. Journal of Peer Production 5, (2014), 1-9. <\/p>\n

Dickel, S., Schneider, C., Thiem, C., and Wenten, K. A. 2019. ‘Engineering Publics: The Different Modes of Civic Technoscience.’ Science & Technology Studies 32, 4, 8-23. <\/p>\n

DiMaggio, P.J. and Powell, W.W., 1983. The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), pp.147-160. <\/p>\n

Drewlani, T. and Seibt, D., 2018. Configuring the Independent Developer. Journal of Peer Production 12, (2018), 96-114. <\/p>\n

Galison, P., 1997. Image and logic: A material culture of microphysics. University of Chicago Press, Chicago. <\/p>\n

Galison, P., 2010. Trading with the enemy. In Gorman ME (ed.) Trading Zones and Interactional Expertise. Creating New Kinds of Collaboration. MIT Press, Cambridge, MA. <\/p>\n

Gorman, M.E. (ed.), (2010). Trading zones and interactional expertise: Creating new kinds of collaboration. MIT Press, Cambridge, MA. <\/p>\n

Kagermann, H., Helbig, J., Hellinger, A. and Wahlster, W. (eds.), 2013. Umsetzungsempfehlungen f\u00fcr das Zukunftsprojekt Industrie 4.0: Deutschlands Zukunft als Produktionsstandort sichern; Abschlussbericht des Arbeitskreises Industrie 4.0. Forschungsunion, Frankfurt a. M. <\/p>\n

ISO\/TS 15066:2016 \u2013 Robots and robotic devices \u2013 Collaborative robots. International Organization for Standardization, Geneva, Switzerland. <\/p>\n

Lindtner, S., 2015. ‘Hacking with Chinese Characteristics: The Promises of the Maker Movement against China\u2019s Manufacturing Culture’. Science, Technology, & Human Values, 40(5), 854-879. <\/p>\n

LeCompte, M.D. and Schensul, J.J., 1999. Analyzing & interpreting ethnographic data. AltaMira Press, Lanham, MD. <\/p>\n

Michalos, G., Makris, S., Tsarouchi, P., Guasch, T., Kontovrakis, D. and Chryssolouris, G., 2015. Design considerations for safe human-robot collaborative workplaces. Procedia CIRP, 37, pp. 248-253. <\/p>\n

Oudshoorn, N. and Pinch, T. (eds.), 2003. How users matter: The co-construction of users and technologies. MIT press, Cambridge, MA. <\/p>\n

Rifkin, J., 2011. The Third Industrial Revolution. How Lateral Power is Transforming Energy, the Economy, and the World. New York, Palgrave Macmillan. <\/p>\n

Rosenstrauch, M.J. and Kr\u00fcger, J., 2017. Safe human-robot-collaboration-introduction and experiment using ISO\/TS 15066. In 3rd International Conference on Control, Automation and Robotics (ICCAR). IEEE Press, Piscataway, NJ, USA, 740-744. <\/p>\n

Schubert, C. and Kolb, A., 2020. Designing Technology, Developing Theory: Toward a Symmetrical Approach. Science, Technology, & Human Values. https:\/\/doi.org\/10.1177\/0162243920941581<\/a> <\/p>\n

Seravalli, A., 2012. Infrastructuring for opening production, from participatory design to participatory making? In Proceedings of the 12th Participatory Design Conference: Exploratory Papers, Workshop Descriptions, Industry Cases-Volume 2 (pp. 53-56). <\/p>\n

Smith, A. 2014, \u2018Technology networks for socially useful production,\u2019 Journal of Peer Production 5. <\/p>\n

Smith, A., Fressoli, M., Galdos Frisancho, M. and Moreira, A., 2020. Post-automation: Report from an international workshop. University of Sussex. <\/p>\n

S\u00f6derberg, J. & Maxigas (ed.) 2014. Book of Peer Production. Aarhus, Aarhus Universitetsf\u00f6rlag A\/S. <\/p>\n

Star, S.L. and Griesemer, J.R., 1989. Institutional ecology, \u2018translations\u2019 and boundary objects: Amateurs and professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39. Social studies of science, 19(3), pp.387-420. <\/p>\n

Str\u00fcbing, J., 2014. Grounded Theory: zur sozialtheoretischen und epistemologischen Fundierung eines pragmatistischen Forschungsstils (3. Auflage). Springer VS, Wiesbaden. <\/p>\n

Tanenbaum, J.G., Williams, A.M., Desjardins, A. and Tanenbaum, K., 2013, April. Democratizing technology: pleasure, utility and expressiveness in DIY and maker practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2603-2612). <\/p>\n

Troxler, P., 2013. Making the 3rd industrial revolution. FabLabs: Of machines, makers and inventors, Transcript Publishers, Bielefeld. <\/p>\n

Van Maanen, J., 2011. Tales of the Field: On Writing Ethnography. 2nd ed. Chicago, University of Chicago Press. <\/p>\n

Wilkes, D.M., Alford, A., Cambron, M.E., Rogers, T.E., Peters, R.A. and Kawamura, K., 1999. Designing for human-robot symbiosis. Industrial Robot 26, pp. 49\u201358. <\/p>\n

Wolf, P., & Troxler, P., 2015. ‘Look Who’s Acting! Applying Actor Network Theory for Studying Knowledge Sharing in a Co-Design Project’. International Journal of Actor-Network Theory and Technological Innovation, 7(3), 15-33. <\/p>\n

Woodcock, J. and Graham, M., 2019. The gig economy: a critical introduction. Polity. <\/p>\n

Woolgar, S., 1990. Configuring the user: the case of usability trials. The Sociological Review, 38(1, Suppl.), pp. 58-99.<\/p>\n
