AI and the challenges to intellectual property
Issues and risks presented by AI and related technologies, particularly concerning intellectual property, data privacy and security risks.
The impact of AI on Architectural Technology
Architectural Technologists and other architectural and design professions have not been immune to fears that artificial intelligence (AI) will take jobs and make existing roles redundant. Recently, The Guardian posed the question: 'Will AI wipe out architects?' Similarly, The Financial Times earlier this year ominously declared that 'AI is coming for architecture'. It is important to remember that most professions have been warned over the last 12 months that AI is going to steal their jobs, so in this respect, architecture and building design are not unique. Despite this, we have already seen how AI can significantly enhance these practices by creating helpful tools and efficiencies.
As lawyers, we have identified several issues and risks presented by AI and related technologies that those in the building design sector must be aware of, particularly concerning intellectual property, data privacy and security risks.
Adopting AI in architectural practices
Architectural Technologists are utilising AI and related technologies in several ways, the first being virtual reality (VR), which is already heavily used by many practices embracing the opportunity to explore created 3D environments.
For our clients who are general practice or small commercial architects and designers, VR is used more frequently than AI, although we are observing an increasing number using platforms such as ChatGPT. The purposes are different though: VR aids the design process, whereas a language model like ChatGPT is used to save time during day-to-day tasks, such as providing a starting point to create project descriptions, blog articles or press releases.
As with all tools, though, the output is only as good as the input. Understanding how they work and how to craft effective prompts helps ensure language tools produce content with the appropriate level of detail and relevance. For this reason, tools like ChatGPT, at least for now, do not have the capability to make jobs redundant. They require an expert to craft the prompts and oversee the output to ensure it is not just technically accurate but also aligns with the company’s brand and approach. Only with this prior knowledge and expertise are AI programmes like ChatGPT able to generate time and cost savings.
Another form of AI gaining traction in the industry is the text-to-image generator; programmes such as DALL-E, Midjourney and Stable Diffusion. Like ChatGPT, these programmes take prompts and generate new content – in this case images rather than text – drawing on patterns learned from the vast datasets on which they were trained. This can save Architectural Technologists significant time, as any follow-up changes can be made simply by prompting the AI again – no arduous Photoshopping required.
Intellectual property concerns and legal landscape
While many of these AI programmes represent significant drivers of efficiency for Architectural Technologists, there are numerous legal issues to be aware of, particularly in relation to intellectual property (IP).
For an Architectural Technologist, these issues should be considered from the perspective of both IP creation and IP protection. An IP creation perspective means avoiding infringement of a third party’s IP through the use of AI-generated output in your business. An IP protection perspective means avoiding third parties infringing your own IP by using your work to train their AI models.
While there are a range of different types of IP, for the purposes of this article we will focus on copyright and design rights, as they are the most relevant IP rights in architectural and design professions.
From an IP creation perspective, can an AI model be considered the creator of IP, instead of the designer? Under English law, the main copyright legislation is the Copyright, Designs and Patents Act 1988 (CDPA 1988). In some senses, copyright protection is straightforward to obtain, as there is a relatively low threshold to its creation and there is no need for formal registration under English law. But when AI is involved, the position is less straightforward.
To qualify for copyright protection, creative works must first be ‘original’, which traditionally meant that the work had to be created through the author’s own skill, judgment and individual effort. More recently, ‘original’ has been defined as the “author’s own intellectual creation”.
Is a work created by AI its own intellectual creation? We do not currently have a definitive answer from the courts. In our view, it is unlikely that copyright would exist in a work created by AI, because the AI itself cannot independently create the work; it needs to be fed information that it uses to create the work.
Even if copyright is found to exist, there is then a separate question of who owns the copyright. The CDPA 1988 says that the first owner will be the author (this is true for copyright and designs). The ‘author’ is defined as the person who creates the work. As with other intellectual property rights, where a work is created during the author’s employment, the employer will own the copyright. However, if a work created by AI meets the originality requirement (which, as discussed above, we believe is unlikely), who would the author be?
Unlike the law in many jurisdictions, the CDPA 1988 expressly provides protection to works that are completely computer-generated. For those works, “the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken”. Separately, the CDPA 1988 defines ‘computer-generated’ to mean works generated by a computer in circumstances where there is no human author. If the legislation means that the person behind the development of the AI will be the author, then how much intellectual input did that person have in the individual output? Was there any ‘intellectual creation’ from this person?
It would be a major leap to argue the AI developer’s role was significant in the creation of each individual output. It is likely that the legislation means the person who inputted the instructions to the AI will be the author, but at the time of writing this has not been confirmed by any court.
The UK government, following consultation in recent years, has accepted that AI systems should not be considered the owner of a design. Until this is tested in the courts, though, we cannot be certain that it is the current law.
Moving on to the IP protection angle, it is important to understand who owns AI-generated work and how AI models are trained on existing material. Very few of these issues have yet been considered in detail by the courts. One live example involves Getty Images, which is suing Stability AI (the developer of the image generator Stable Diffusion) in the UK and the US for copyright infringement, pointing to the claimed reproduction of the Getty Images watermark as evidence of the infringement. Similarly, at the end of last year, George R. R. Martin, John Grisham and fifteen other authors filed a US lawsuit against OpenAI (the developer of ChatGPT) for copyright infringement and an alleged “systematic theft on a mass scale” in relation to the data used to train ChatGPT.
If an author can prove their copyright work has been replicated, this gives rise to a claim of copyright infringement. As mentioned above, this could feasibly include the use of copyright works (such as images) in training an AI model. While we await the outcome of the above cases, in practice, given the opaque workings of many AI models and the reluctance of AI providers to disclose their training methods, proving the copying of an Architectural Technologist’s IP is likely to be incredibly difficult in all but the most blatant cases.
Looking ahead
While the law is in a state of flux, it is nevertheless prudent for those in the built environment sector to be mindful about using AI to create designs. As the above court cases demonstrate, there are unresolved issues around how AI models use material in training. This creates a twofold problem for designers, as it is very difficult to peer behind the curtain and understand how an AI model is trained. How can we know whether it is using copyright material in an infringing manner? Your IP could be infringed through its use in training an AI model, allowing others to benefit from your work, while any output derived from that model could itself be infringing.
Moral rights also factor into this debate, particularly for an architect or Architectural Technologist: if they are the author of the work, they own copyright in buildings constructed to their design, and they have the moral right to be identified as the author of that work. This usually takes the form of a plaque or signage near the entrance, for instance.
Beyond IP, there are also relevant considerations from a data privacy and security perspective. Every input into an AI programme may be used to train that model and develop its intelligence, and you cannot guarantee that the data will be safe and adequately protected from breaches and other cybersecurity risks. Those already utilising tools like ChatGPT should therefore be careful not to include any client-specific or otherwise sensitive or personal information in their prompts, both for legal reasons and because of wider industry regulatory requirements, not to mention the reputational damage and commercial impact a breach would entail.
Architectural Technologists are certainly going to have to learn to coexist with AI, and many will be investing in ways to make the technology work for them. We are undoubtedly at an exciting frontier of development in truly groundbreaking technology. While the legal landscape takes shape, the built environment sector will be able to exploit the potential opportunities for growth and innovation. But it must equally be aware of the emerging risks by continuing to learn and adapt to the evolving legal framework.
This article is from AT Journal issue 150