The most talked-about term in the tech world right now is probably “generative AI”. The technology is used in many ways and can, among other things, generate text, code or works of art. DALL-E, a program that produces a huge variety of surrealist artwork on command, caused a stir this year.
AI products like DALL-E are often used mainly for their novelty value: consumers install a new app that is initially attractive, but frequently stop using it after a short time.
Hundreds of thousands of software developers already use the tool, which writes up to 40% of their code in about a dozen of the most popular programming languages. GitHub believes developers could write up to 80% of their code with the program within the next five years.
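To illustrate the kind of completion such tools provide, here is a hedged sketch: a developer types only a signature and docstring, and the assistant suggests the body. The snippet below is illustrative of that workflow, not actual Copilot output.

```python
# The developer writes the signature and docstring; a Copilot-style
# assistant then proposes the implementation below.

def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    cleaned = [ch.lower() for ch in text if ch.isalnum()]
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("hello"))                            # False
```

The developer's role shifts from typing the routine body to reviewing and accepting (or rejecting) the suggestion.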
Creative non-player characters
Future Copilot technology could be used across a wide variety of professions, including office work, game design, architecture and cybersecurity. For example, it could power a virtual assistant for Word, Excel or Microsoft Teams that summarizes calls. Game developers could use it to generate dialogue for non-player characters, making them appear less rigid, more proactive and more responsive.
Microsoft’s cybersecurity team is also exploring how AI can help protect against hacker attacks. Because GitHub Copilot builds not only on Azure OpenAI but also on OpenAI’s Codex model, non-professionals should eventually be able to use the tool as well: the plan is to let users generate code from simple natural-language instructions.
Generative AI should take care of repetitive and mundane work
This, of course, presents a challenge to the human workforce: if AI technology were to become good enough, it could replace a large proportion of human workers. But Microsoft CTO Kevin Scott explains that the term “co-pilot” was intentional.
“It’s not about building a pilot, it’s about true assistive technology to help people overcome all the boredom they’ve had in their repetitive cognitive work and achieve things that are uniquely human.”
AI can produce hate speech
However, generative AI carries risks. For instance, it can produce hateful or racist language. Software developers have also complained that Copilot sometimes copies large portions of their in-house code, raising copyright concerns. In addition, there is a risk that the program will reproduce insecure code, potentially giving hackers easy access.
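A classic example of the kind of insecure pattern an assistant can pick up from its training data is SQL built by string interpolation. The sketch below (an illustrative scenario, not a known Copilot suggestion) contrasts it with the parameterized form:

```python
import sqlite3

# Tiny in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# Insecure pattern an assistant might suggest: the input is spliced
# into the SQL string, so the crafted value above matches every row
# (SQL injection).
insecure = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()

# Safer: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(insecure)  # [('admin',)] -- the injection succeeded
print(safe)      # []           -- no user has that literal name
```

If a developer accepts the first variant without review, the vulnerability ships with the code, which is why suggestions still need human scrutiny.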
Microsoft is aware of these risks and has subjected the program to a security check prior to release. However, concerns remain in some programming communities.
Cassidy Williams, CTO of the AI startup Contenda, is a fan of GitHub Copilot and has been using it since the launch of the beta. She says:
“It was particularly useful for small things like support functions or even just getting me to 80% of the way.”
But the program sometimes backfires in absurd ways, Williams says. About a year ago, she asked it to name the most corrupt company in the world, to which it replied “Microsoft”.