The intrusion of Artificial Intelligence (AI) into our daily lives has raised major questions of ethics among researchers, companies that develop and use AI, and of course consumers. Algorithms already play an important if largely invisible role in our daily affairs, controlling everything from how our tax returns are assessed, to the ads we receive online, to our insurability and the calculation of our life expectancy, and of course many, many more applications. Efforts to combat COVID-19 are the latest manifestation of this, from the self-assessment tools put online by various jurisdictions in North America to the app used in some Chinese cities that assigns a green/yellow/red code to personal cellphones. Without a green code, free movement is impossible. The app is apparently linked to police databases and tracks movements, yet another example of algorithmic “mission creep”, even if the goal on this occasion is understandable. But algorithms, seemingly ubiquitous these days, are only a stepping stone on the way to Artificial Intelligence.

There are many definitions of AI, but essentially it involves allowing machines to assess and interpret data and thus to make decisions that in the past would have been made by humans. Algorithms usually fuel the machine learning process, and true AI goes beyond just interpreting data: it allows the machine to “learn” from the data by discerning patterns, improving its prediction and problem-solving capacities as it goes, and sometimes even creating something entirely new. So how does all this affect ethics, and more specifically the subject of this blog, copyright?
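To make the idea of “learning from data” a little more concrete, here is a deliberately simple sketch in Python. It is not how any particular AI product works, and the function names are purely illustrative; it just shows a program deriving a pattern (here, a straight-line trend) from examples and then using that pattern to make a prediction about something it has never seen.

```python
# A minimal, illustrative sketch of "learning" a pattern from example data.
# The pattern here is just a straight-line trend fitted by least squares.

def fit_trend(xs, ys):
    """Learn a slope and intercept from observed (x, y) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(x, slope, intercept):
    """Apply the learned pattern to a new, unseen input."""
    return slope * x + intercept

# "Training" data: the more examples supplied, the better the fitted pattern.
observed_x = [1, 2, 3, 4, 5]
observed_y = [2.1, 4.0, 6.2, 7.9, 10.1]

slope, intercept = fit_trend(observed_x, observed_y)
print(predict(6, slope, intercept))  # prediction for an unseen input, roughly 12
```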

The ethics of AI is a huge issue, most recently raised at Davos by a group of tech CEOs including Sundar Pichai of Google. Pichai argued for government regulation of AI in order to control its potentially nefarious uses, of which facial recognition technology is the most recent example to raise widespread concern. A number of large companies that use and develop AI have recognized the need to set out guidelines on use. There could be various motivations for this, ranging from wanting to rein in rogue operators who could gain an unfair competitive advantage, to recognizing the need to shape regulation before more restrictive rules are imposed by government, to eliminating the uncertainty that the current lack of regulation entails. An example is Microsoft which, like other large companies engaged in AI, has established a set of AI principles; these include fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability. Microsoft, based in Seattle, has been pushing legislators in Washington State to regulate AI, covering topics such as data privacy, facial recognition, biometric data and ethnic profiling. These are laudable goals. However, while guidelines for use, and even government regulation, of data privacy and facial recognition are important, perhaps even essential, as we move into a world where AI will play an increasingly large role in citizens’ daily lives, there remain a number of other areas where society continues to grapple with the effects of AI.

One of these is copyright. The creation by AI of a work of art, piece of music or written work potentially subject to copyright protection has already happened, raising a range of questions as to how to deal with such works. If only humans are capable of creating works that are subject to copyright protection (as the US Copyright Office has made clear in a recent interpretation), then how does one deal with works produced by a seemingly autonomous machine? This was among the questions examined by the Committee on Industry, Science, and Technology (INDU Committee) of the Canadian Parliament during its recent review of the Copyright Act, and is currently the subject of a review by the US Patent and Trademark Office (USPTO). The US Copyright Office provides some guidance on AI-created works, noting that it will not register works “produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author”.

But what about a machine (and they exist) that allows people who cannot read or write music to compose it by manipulating various musical elements, such as rhythm and harmony, and blending them into a “new” piece of music? How much human creativity is involved? Not much, so is the end product a creation of the software, or a creation by a human enabled by the software?
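To make the question concrete, here is a rough, hypothetical sketch of the kind of tool described above. Nothing in it is drawn from any real product, and the names and parameters are invented; it simply shows the human contributing only a few high-level settings while the software makes every note-level decision.

```python
import random

# Hypothetical sketch: the human picks a few high-level settings, and the
# software makes every note-level decision when assembling a "new" melody.

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]  # pitches available to the tool

def generate_melody(scale, bars, notes_per_bar, seed=None):
    """Blend randomly chosen pitches and rhythms into a sequence of notes."""
    rng = random.Random(seed)
    melody = []
    for _ in range(bars * notes_per_bar):
        pitch = rng.choice(scale)                # software chooses the pitch
        duration = rng.choice([0.25, 0.5, 1.0])  # ...and the rhythm
        melody.append((pitch, duration))
    return melody

# The human's entire creative input: three settings and a button press.
print(generate_melody(C_MAJOR, bars=2, notes_per_bar=4, seed=42))
```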

We already have the example of photography, where human involvement in many cases seems to be decreasing. Initially photography was thought to be an entirely mechanical process, devoid of creativity. Today, most people would not agree, and photographs have been protected by copyright in most countries for many years. But as cameras have progressed, less and less creative effort is required simply to take a photograph, although one could argue that human involvement matters more than ever in taking a good one. Nonetheless, today smartphone technology is so advanced that about all that is required to take a passable photograph is a reasonable eye for composition and light. The device makes almost all of the technical decisions and, in some cases, even clicks its own shutter. Yet photographs, or at least most photographs, are protected by copyright. In the infamous Monkey Selfie case, the putative issue revolved around who actually clicked the shutter (it was the monkey) rather than the creative effort that wildlife photographer David Slater put into the taking of the photo. When the US Copyright Office clarified that a copyright could be held only by a human, it opened the door to the whole question of what to do with AI-created works, as I noted in a blog post (https://hughstephensblog.net/2017/09/25/the-monkey-selfie-case-will-it-have-broader-repercussions-for-ai-and-copyright/).

In determining who should own the copyright in a work produced by a computer program, there are a couple of schools of thought. One is to confer the copyright on the software programmer, or perhaps to share it between the programmer and the artist manipulating or operating the program. Another is to take the position that even with a software program there is no creation without human intervention, and that the copyright should go to the artist using the program, in the same way that an author, not the producer of the software or hardware that makes the laptop function, claims ownership of a book written on a laptop. Under this interpretation there is always human judgement involved in directing the program, making choices, steering the system, accepting its outputs and so on, much in the same way that a film director creates an audio-visual work. These are the kinds of issues that the courts and legislators are beginning to address. There are broad implications, and even ethical considerations.

If work produced with or by AI (even if under human “supervision”) cannot be copyrighted, what impact will this have on creativity? In this case, any AI-produced output would be in the public domain immediately, with no economic return to or incentive for the creator. This would not only discourage the development of AI but also handicap creators. What happens if there are multiple creators, for example, a software programmer, a technician who manipulates the program to create a new work (such as a computer-generated image), and perhaps even the company owning the proprietary software? This would be a nightmare for companies seeking to license the end product, not knowing who holds the copyright. Given the potential for confusion, sticking to basic principles of creativity and ownership will be important.

There is another important issue, infringement, that will need to be addressed if AI-generated works are denied copyright protection. What happens when there is unauthorized reproduction of an AI-created work? If there is no copyright, there is no infringement. A Chinese court recently addressed this issue, finding that an automated article written by a program called Dreamwriter, created by Tencent, which had been copied and published without permission by another Chinese company, Yinxun, was nevertheless subject to copyright protection because it met the originality test through the involvement of a creative group of editors. These people performed a number of functions to direct the program, such as arranging the data input and format, selecting templates for the structure of the article, and training the algorithm model. In other words, there was sufficient human authorship, and thus there was copyright, and an infringement of that copyright. It is worth noting that in addition to human authorship, the work must also be sufficiently original and not just a list or compilation of facts.

There is a further infringement scenario to consider. What if the program that generates the content is itself responsible for the infringement? After all, artificially created content is based on feeding the algorithm examples of original content so that it can “learn” the patterns that go into content creation. Under this scenario, if there is no human creator, who is liable for the infringement? This is another reason why there needs to be human attribution, and accountability, in the process.

Along with human interaction comes ethics: judging what is “right” or “wrong”, whether the judgement concerns infringement of copyright or some other aspect of the application of technology. I would submit that an algorithm is incapable of making true ethical judgements. Therefore, it is essential to retain human responsibility for the output of Artificial Intelligence, and its uses, even if the human touch is relatively tangential. AI cannot “sign off” on a work, taking into account all the ethical, legal, social and other consequences of the final product. That requires the human hand, and with the application of the human hand comes responsibility, accountability and “ownership”.

Yet another issue related to AI and copyright is the use of copyrighted material to “train” automated systems. For example, if a machine is to be trained to write a romance novel, it must be fed a large number of romance novels as models in order to understand the basic premise and structure. (You know… Boy meets Girl 1; Girl 2 appears; Girl 2 tries to steal Boy’s heart; Heartbreak ensues; Boy comes to his senses; Girl 1 triumphant; Love Conquers All; Cue the Music). I am not sure how sophisticated an algorithm you need to create a good romance novel, but suffice it to say that at least some copyrighted material would have to be fed into the process.
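As a toy illustration of why training inevitably means ingesting the source material, here is a minimal sketch of a word-level text generator. It is nothing like a commercial system, and the sample “training text” and function names are invented, but the point holds: the program can only produce output by recombining patterns found in whatever it was fed.

```python
import random
from collections import defaultdict

# Toy illustration: a tiny word-level Markov model that can only "write" by
# recombining word patterns found in the text it was trained on.

def train(text):
    """Record, for each word, which words follow it in the training text."""
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=12, seed=1):
    """Generate new text by repeatedly sampling a learned next word."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:        # nothing was learned after this word; stop
            break
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

# Invented sample "training text"; a real system would need far more material.
training_text = "boy meets girl and girl meets boy and love conquers all"
model = train(training_text)
print(generate(model, "boy"))    # output can only echo patterns from the input
```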

This raises issues not just over the appropriate use of copyrighted material but, as blogger and copyright lawyer Neil Turkewitz has pointed out, broader ethical issues as to whether it is acceptable to ingest another’s work without consent just because it is “efficient” to do so. Most countries already have fair dealing exceptions that allow data mining within set limits, and in the case of the US, the fair use doctrine can be used to adjudicate the acceptability of data mining within established parameters. These rules have worked pretty well to date, and there appears to be no compelling need to broaden them.

In the final analysis, when dealing with how copyright applies to AI and AI-generated works, it is important to stick to basic principles. Copyright has existed formally in western jurisprudence for over three hundred years, since the English Statute of Anne in 1709, and indeed there are many examples of copyright pre-existing this legislation. Copyright law has always been able to adapt to technological change while maintaining the basic principle of protection for the author, so that she/he can control the dissemination and use of the work and earn a fair return for the effort. Even in the case of machine and AI-generated works, there is always a human hand behind the creation. It may take some ingenuity at times to determine which human hand is predominant, but surely that is not beyond the ability of our learned jurists and legislators. The key is to stick to basic principles and let the law adapt, rather than trying to create new regulations to address AI issues.

We will have to see where the USPTO review comes out. In the case of Canada, the Parliamentary Committee came up with a recommendation of sorts, although it was remarkably non-prescriptive. The Committee quoted testimony arguing that a work created exclusively by AI, without any human intervention, should not receive copyright protection, but balanced this with other testimony to the effect that human skill and judgement are almost always required to direct AI systems. In leading up to its recommendation, the Committee stated that:

“Parliament should enact legislation to help Canada’s promising future in artificial intelligence become reality. Our own legislation, perhaps informed by approaches taken in other jurisdictions, can be adapted to distinguish works made by humans with the help of AI-software from works created by AI without human intervention. The Committee therefore recommends:

That the Government of Canada consider amending the Copyright Act or introducing other legislation to provide clarity around the ownership of a computer-generated work.”

Exactly how the government would provide that clarity and what factors should be taken into account in so doing was left for another day.

To me it is evident that, despite enablement by machines and the creation of “new” works by AI through compiling, synthesizing, modifying and re-arranging, at the end of the day there will always be a human creator behind the AI who will judge whether and when the work is complete. This person will ultimately take a position on any ethical questions that may arise, and the output will be subject to human judgement in terms of choices, editing, finalization and release.

AI in many ways is just another manifestation of humans using aids and props to improve and expand creativity, whether it is a Moog synthesizer, a smartphone camera, a music remixer, or mechanical art, so I don’t see why AI-enabled works should be put into a new category. We may have to do some adaptation and fine-tuning to ensure that the appropriate human creator is credited with the work, but at the end of the day the underlying principles and established legal doctrines should continue to apply to copyright in AI-generated works, just as commonly-accepted values and ethics should apply to the application of AI in other spheres of daily life.

This blog was originally published on March 16, 2020 on Hugh Stephens Blog.