It's safe to say that artificial intelligence (AI) is not a trend. Beyond robot dogs slipping on banana peels, the innovative capabilities AI will bring to business, government and the sciences will be transformative.

Once only imagined in science fiction, AI is poised to challenge and blur our concepts of computing and of the ‘natural’ human. This will require governments and sectors to develop expansive foresight and a critical understanding of the impacts of digitization and emerging technologies.

As both a sociologist and a technologist, I find it fascinating to consider how our social systems will adapt as these complex technologies automate human processes and collide with our ‘natural’ world. Although we’re in an early transition phase to what is being called a new Industrial Revolution, postulations on how the future will form are already swirling.

This is more than automation and computation: for the first time, physical, biological and social systems are converging with the digital replication of the most complex system in the known universe, the human brain. The complexities are exponential, with AI’s intelligence and decision-making processes replicating intricate higher brain functions, as seen in the probabilistic programming techniques of Google’s DeepMind.

For those reasons, the ethical and moral boundaries expected to manage and mitigate AI’s potential negative effects are generating critical debate among academics and practitioners, debate that reaches beyond responsibility and liability.

Started from the Bottom, Now We’re Here …

Within these debates more contentious questions are surfacing: Who – or which nation – decides what is ethical or moral? Where do ideological and cultural values fit in? What happens when technology governance cannot be agreed upon – are other technologies employed, such as hyper-meshnets, to create cyber-barriers – or do we rely on social or economic sanctions as we do now?

For nations that do agree to a common set of technology ethics, would that signal a move to a World Government model, which raises its own concerns? As cyber’s civil space increases and physical spaces decrease, will governments and decision-makers be capable of providing technological governance that maintains political and societal trust?

The answers are important. Downstream, they will form the basis for the AI algorithms and databases that will develop autonomous learning processes and make decisions – without a human in control – from visual, auditory, pattern-recognition and interpretation data.

With “free” AI databases already popping up that carry no assurance or certification of accuracy or integrity, the issue is imminent. Misapplication, misuse or poor design could inflict long-standing, literally humanity-altering damage to the safety, security, quality and well-being of human life.

The New Dynamic Risks

In practical terms, if the integrity of AI databases or the definition of algorithm characteristics is not accurate, the output data will be unreliable and ‘hokey’ at best.

Take a simple example from object identification: if a few photos of pears are thrown in with the several thousand photos of apples used to define the scope of an apple’s acceptable features, the learned definition of ‘apple’ quietly degrades. That is a serious problem – especially if apples are a public safety threat or a military target.
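To make that concrete, here is a minimal sketch in plain Python (all feature values are invented for illustration) of how just two mislabeled pears stretch the learned scope of an apple’s features until pears pass as apples:

```python
# Minimal sketch: a "model" that learns an apple's acceptable features as the
# min/max range of a single measured feature (roundness; values are invented).

apples = [0.88, 0.91, 0.93, 0.90, 0.89]   # clean, correctly labeled apples
mislabeled_pears = [0.58, 0.62]           # pears wrongly tagged as apples

def learn_scope(samples):
    """Learn the acceptable feature range from the training samples."""
    return min(samples), max(samples)

def is_apple(roundness, scope):
    lo, hi = scope
    return lo <= roundness <= hi

clean_scope = learn_scope(apples)                        # (0.88, 0.93)
poisoned_scope = learn_scope(apples + mislabeled_pears)  # (0.58, 0.93)

pear = 0.60
print(is_apple(pear, clean_scope))     # False: the pear is correctly rejected
print(is_apple(pear, poisoned_scope))  # True: two bad labels widened the
                                       # "apple" scope enough to swallow pears
```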

On the flip side, high-integrity definitions are powerful. A high-integrity apple schema and a high-integrity pear schema allow AI to “learn” what an apple is compared to a pear. That learned data can then be extended and associated with other objects, based on the features they have and do not have – a sort of process of elimination through association, sketched below.
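As a rough illustration of that idea (the schemas and feature names below are invented), each object type can be defined by the features it must have, and an unknown object is classified by eliminating every schema it fails to satisfy:

```python
# Hypothetical high-integrity schemas: each object type is defined by the
# features it must have. Feature names are invented for illustration.

SCHEMAS = {
    "apple": {"round", "dimpled_top", "short_stem"},
    "pear":  {"tapered_top", "bulbous_bottom", "long_stem"},
}

def classify_by_elimination(observed):
    """Eliminate every schema the observation fails to satisfy; whatever
    survives is the association for the unknown object."""
    survivors = {name for name, required in SCHEMAS.items()
                 if required <= observed}  # all required features present?
    return survivors or {"unknown"}

# A round fruit with a dimpled top and a short stem fails the pear schema,
# so by elimination it associates with "apple".
print(classify_by_elimination({"round", "dimpled_top", "short_stem", "red"}))
# -> {'apple'}
```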

Public safety and national security sectors that rely on specialized technologies and critical data will benefit greatly from AI, but not without risks. Whether developing adaptive and extensible offensive and defensive cyber-warfare countermeasures, correlating massive amounts of integrated information or performing facial recognition using human physiological factors, the devil will be in the details.

Right now, government and defence verification, validation and certification processes focus solely on the integrity, assurance and consistency of data and of the systems that store, process and retrieve it. A shift to algorithm integrity is significant and will require that all processes and practices be not only well tested but standardized and governed.
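The difference can be sketched in a few lines of Python (everything here is a hypothetical placeholder): today’s certification hashes the data, whereas algorithm integrity would mean replaying a governed suite of behavioural test cases against the algorithm itself:

```python
import hashlib

# Today's focus: data and system integrity -- prove the stored bytes are
# exactly what was certified.
def data_integrity_ok(blob: bytes, certified_sha256: str) -> bool:
    return hashlib.sha256(blob).hexdigest() == certified_sha256

# The shift: algorithm integrity -- prove the *behaviour* matches what was
# certified, by replaying a governed suite of golden test cases.
def classify(features: frozenset) -> str:
    # Hypothetical stand-in for a certified classifier.
    return "apple" if "dimpled_top" in features else "pear"

GOLDEN_CASES = [
    (frozenset({"round", "dimpled_top", "short_stem"}), "apple"),
    (frozenset({"tapered_top", "bulbous_bottom", "long_stem"}), "pear"),
]

def algorithm_integrity_ok() -> bool:
    return all(classify(inp) == out for inp, out in GOLDEN_CASES)

data = b"certified-training-data"
print(data_integrity_ok(data, hashlib.sha256(data).hexdigest()))  # True
print(algorithm_integrity_ok())  # True: behaviour matches the certified suite
```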

From the above examples, it’s clear how ambiguous AI’s outputs can be and why risk in the form of human error, bias or tampering must be mitigated through early validation and rigour. As other technologies emerge – open and portable identity, autonomous agents, Deep Learning and Artificial Neural Networks – the risks accelerate the need to understand and secure these processes, as well as the technology itself.

Unique challenges to advanced technology use will undoubtedly surface. This month we saw this with Google’s Project Maven, a U.S. Department of Defense AI project that analyzes drone imagery to improve battlefield targeting. Sentiment among Google employees about the project – the use of AI and its “weaponization” – was so negative that Google declined to renew its involvement and adopted corporate AI principles that preclude weapons work.

Where Do We Go From Here?

In a world where nations already struggle to reach consensus on complex agreements and treaties, such as those governing climate, trade and human rights, any technology that challenges social constructs and values will be met with scrutiny.

Undoubtedly, corporations and governments will set ethical and security standards for these advanced technologies, but legislation, treaties and governance will provide little assurance of the proper, ethical use of technology.

The answers to ethics in technology are just not clear yet, and finding them will call for an unprecedented level of investment, adaptation and preparedness. It is likely we will continue to stick-handle our way through technological change for the time being, until we determine what will be sufficient to secure the assets and stakes involved and how to govern the responsible actions of other nations.

But one thing we can’t lose sight of: whatever we collectively disallow will most certainly be developed by some actor or nation, spawning threats that will quickly be commoditized into thriving black markets. This alone stands to redefine cybersecurity, countermeasures and safeguards at an intellectual and social level never imagined before – except maybe in science fiction.