Limitations of Current Artificial Intelligence (AI) Techniques

Published: April 3rd 2023
Awareness, knowledge gaps and self-learning Artificial Intelligence

Despite the very high levels of success claimed for deep-learning AI, the field now finds itself fast approaching a barrier. On one side, we have the narrow AI, normally based on deep learning, that exists today. On the other side of the barrier lies the land of artificial general intelligence (AGI), an entity whose capabilities are essentially indistinguishable from those of a human.


If we want to realise AI as it is depicted in popular science fiction, we will need to overcome this barrier.


What about ChatGPT, and all those other AI-based applications using current technologies? Don’t they represent a significant step along the road to AGI? Paradoxically, the answer is a qualified 'no'. While some current AI applications give the appearance of sentience, they still depend to a large extent on human labour to source and curate the extensive volumes of data needed to train them. Worse still, if they stray outside the specific areas they know about, they can (and often do) generate total gibberish.


Currently, there is no AI capable of passing the Turing Test convincingly. To do this, an AI needs to understand that it has gaps in its knowledge and to learn for itself without human intervention. It must have a level of awareness of what it doesn't know which it can use to direct its independent learning.


There is a ‘knowledge barrier’: the dividing line between narrow AI, which needs to be spoon-fed, and a more general form of AI which is capable of seeing the gaps in its knowledge and learning for itself.


To understand why this is a crucial issue, let us look at an example. Suppose you want to assess biodiversity loss by automating biodiversity cataloguing. You could do this by deploying a vision system which uses AI to identify the animals it 'sees'. You might expect current AI to ace this task, and it can, as long as it doesn't too often 'see' an animal it has not been taught about. In the Arctic wilderness, inhabited in winter by only a small number of animals such as polar bears and seals, it is likely to do just fine. But in Yellowstone National Park it will fail miserably. Why, you may ask? In a species-rich ecosystem like Yellowstone, there is a very good chance that the system will 'see' animals it hasn't been taught about. Narrow artificial intelligence does a poor job of identifying them as unknown and either rejecting them or, better still, determining their identity by asking human experts pertinent questions and/or automatically conducting appropriate directed Internet searches.
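
To make the failure mode concrete, here is a minimal sketch in Python of the kind of 'reject unknowns' check described above. The species list, threshold value and identify helper are purely illustrative, and a simple confidence threshold on the classifier's scores is only a crude stand-in for genuine awareness of knowledge gaps, but it shows the basic idea: accept the top prediction only when the model is confident, and otherwise flag the sighting for a human expert rather than forcing a wrong label.

```python
import numpy as np

# Hypothetical list of species the vision model was trained to recognise.
KNOWN_SPECIES = ["polar bear", "seal", "arctic fox"]

def identify(probabilities: np.ndarray, threshold: float = 0.75) -> str:
    """Turn a classifier's per-class scores into a label, or flag an unknown.

    probabilities: softmax output of the classifier, one score per known species.
    threshold: minimum confidence needed to accept the top prediction; anything
               below it is treated as a knowledge gap to be resolved by a human
               expert or a directed search, rather than forced into a wrong label.
    """
    best = int(np.argmax(probabilities))
    if probabilities[best] < threshold:
        return "unknown - refer to expert"
    return KNOWN_SPECIES[best]

# A confident sighting is accepted...
print(identify(np.array([0.92, 0.05, 0.03])))  # -> polar bear
# ...but an animal the model has never been taught about tends to produce a
# flat, uncertain distribution and is flagged instead of being mislabelled.
print(identify(np.array([0.40, 0.35, 0.25])))  # -> unknown - refer to expert
```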


The capability to be aware of knowledge gaps and to correct them autonomously may not deliver a fully functional AGI, but it would represent a major step forward in capability, and one which is essential for AI technologies deployed, quite literally, in the wild.
