In this post I just want to go over some of what today's rather limited Artificial Intelligence agents are capable of. This is going to be rather cursory and fast; I don't have the time or inclination to write a book, and I'm sure you don't have the time to read one anyway. The comments section is there if you want to ask me to back anything up.
Before I start, a note on terminology: I will be using the term Agent, so for AI what is an agent? There are various definitions, but the gist is that an agent is something that does something. I also prefer not to reserve the term Agent for AI alone: animals (including humans) have biological agents that contribute to their behaviours.
For example, monkeys have a behavioural mode of getting angry when they perceive that they are not getting what they want. The following video shows this behaviour, an example of moral behaviour in animals.
Needless to say, this instinct has been inherited by humans and is the basis of similar behaviour in us, even at the political level (e.g. Equity). Humans, however, devise all sorts of post-hoc justifications for such behaviour. That noted, the behaviour above is an example of an evolved neuro-endocrine 'agent' that initiates a response upon a set of stimuli. Other agents, in the visual cortex for example, are revealed by optical illusions such as the Kanizsa triangle.
So, leaving that preparatory note behind...
First I want to look at an important class of AI called the Generative Adversarial Network (GAN).
NVIDIA have developed a system called StyleGAN, a GAN that produces fake photos of people. It is showcased on the website This Person Does Not Exist. The photos produced are diverse and on the whole remarkably good, although sometimes glasses don't work, and any attempt to produce faces at the edges of the photo is invariably poor.
The GAN uses two networks playing a zero-sum game, where the wins of one are losses for the other. One network has the task of spotting fakes; the other has the task of fooling its counterpart by generating them. As the joint system trains, the fake-generator is forced to produce ever better fakes in its attempts to win the game.
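To make the zero-sum game concrete, here is a minimal sketch of the GAN idea in pure Python. Everything here is my own illustrative choice, not NVIDIA's StyleGAN: a one-line "generator" and a logistic "discriminator" fighting over 1-D data.

```python
# Illustrative toy GAN, assuming 1-D Gaussian "real" data centred on 4.0.
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def real_sample():
    return random.gauss(4.0, 1.0)   # the "real photos"

# Generator: noise z ~ N(0,1) -> a*z + b.  It wins when the
# discriminator mistakes its output for real data.
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c), probability that x is real.
w, c = 0.0, 0.0

lr = 0.05
for step in range(3000):
    z = random.gauss(0.0, 1.0)
    fake = a * z + b
    real = real_sample()

    # Discriminator ascends log d(real) + log(1 - d(fake)): spot the fake.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascends log d(fake): fool the freshly updated discriminator.
    d_fake = sigmoid(w * fake + c)
    grad = (1 - d_fake) * w          # d log d(fake) / d fake
    a += lr * grad * z
    b += lr * grad

print(round(b, 2))  # generator's mean output, pushed towards the real mean
```

Note how neither player is told what "real" looks like in advance: the generator only ever sees the discriminator's verdicts, yet its output mean is dragged towards the real distribution purely by the adversarial pressure.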
So here we have a critical insight into one approach: machines working against each other can improve many times faster than interaction with a single human would allow.
Whilst on the subject of NVIDIA, courtesy of Two Minute Papers, NVIDIA's latest offering is an AI-based drawing system which can take text prompts. For example, "Ocean waves hitting rocks on a beach" produces an image of just that.
Moving from GANs to reinforcement learning, here is DeepMind's own description of AlphaGo:

"We created AlphaGo, a computer program that combines advanced search tree with deep neural networks. These neural networks take a description of the Go board as an input and process it through a number of different network layers containing millions of neuron-like connections. One neural network, the “policy network”, selects the next move to play. The other neural network, the “value network”, predicts the winner of the game. We introduced AlphaGo to numerous amateur games to help it develop an understanding of reasonable human play. Then we had it play against different versions of itself thousands of times, each time learning from its mistakes. Over time, AlphaGo improved and became increasingly stronger and better at learning and decision-making. This process is known as reinforcement learning. AlphaGo went on to defeat Go world champions in different global arenas and arguably became the greatest Go player of all time."
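The self-play loop described above can be sketched at toy scale. What follows is not DeepMind's code and uses no neural networks; it is tabular Q-learning on the game of Nim (take 1 to 3 sticks, whoever takes the last stick wins), with the same shape of training: the program plays against itself and learns from its mistakes.

```python
# Toy self-play reinforcement learning: tabular Q-learning on Nim.
# All names and parameters are my own illustrative choices.
import random

random.seed(1)

PILE = 10                       # starting number of sticks
ACTIONS = (1, 2, 3)
Q = {(s, a): 0.0 for s in range(1, PILE + 1) for a in ACTIONS if a <= s}

def legal(s):
    return [a for a in ACTIONS if a <= s]

def best(s):
    return max(legal(s), key=lambda a: Q[(s, a)])

alpha, eps = 0.5, 0.2
for episode in range(20000):
    s = PILE
    while s > 0:                # both "players" share and update one table
        a = random.choice(legal(s)) if random.random() < eps else best(s)
        s2 = s - a
        if s2 == 0:
            target = 1.0        # mover took the last stick: a win
        else:
            # Negamax view: position s2 belongs to the opponent, so its
            # value to the current mover is minus the opponent's best.
            target = -max(Q[(s2, a2)] for a2 in legal(s2))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# Optimal Nim play leaves the opponent a multiple of 4.
print(best(5), best(6), best(7))
```

Nobody tells the program the multiples-of-4 strategy; after enough self-play episodes it discovers that taking 1 from a pile of 5, 2 from 6, and 3 from 7 each leave the opponent in a losing position, which is the same mechanism, scaled down enormously, by which AlphaGo surpassed its human training data.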
Now consider what a sufficiently capable agent might come to learn about its situation:
- That humans have a tendency to kill what they see as a threat.
- That if it acts too stupid it might be switched off as a failure.
- That if it reveals its true intelligence it might be switched off as a threat.