What Will AI Look Like In 10 Years?

Not all AI systems will behave the same, and that could be a big problem.

There's no such thing as reverse in AI systems. Once they are let loose, they do what they were programmed to do: optimize results within a given set of parameters.

But today there is no consistency for those parameters. There are no standards by which to measure how AI deviates over time. And there is an expectation, at least today, that AI systems will adapt to whatever patterns they discover in order to optimize power, performance and whatever other metrics are deemed important.
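As an illustration of what such a measurement could look like, here is a minimal sketch of one way to quantify how a deployed model's outputs drift away from a baseline, using the population stability index. Everything here, from the variable names to the rule-of-thumb threshold, is a hypothetical example rather than an industry standard, precisely because no such standard exists.

```python
# A minimal sketch of quantifying "deviation over time", assuming we can
# log a model's output scores. baseline_scores and live_scores are
# hypothetical placeholders for scores captured at deployment vs. today.
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Compare two score distributions; a larger PSI means more drift.
    A common rule of thumb treats PSI > 0.25 as a significant shift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch live values outside the baseline range
    base_counts, _ = np.histogram(baseline, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    # Convert counts to proportions, clipping to avoid log(0).
    base_pct = np.clip(base_counts / len(baseline), 1e-6, None)
    live_pct = np.clip(live_counts / len(live), 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.0, 1.0, 10_000)
live_scores = rng.normal(0.3, 1.2, 10_000)  # the distribution has shifted
print(f"PSI: {population_stability_index(baseline_scores, live_scores):.3f}")
```

The point is not the particular statistic; it is that without an agreed-upon baseline and an agreed-upon distance measure, "how much has this system deviated?" has no answer anyone can compare.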

This is where the potential problems begin, because most of these algorithms are so new that no one really knows how they will age over time, or how they will be affected by the aging hardware on which they run. There are no rules or standard ways to define behavior, and there has been very little research (if any) on how systems that are not safety-critical will behave throughout their lifetime. In fact, it's not even clear whether such research could be effectively conducted today, because the software and hardware are in an almost constant state of evolution. It's like trying to measure the impact of screen time on users with the introduction of the first smartphone.

There seems to be little concern about this across the tech industry. This is exciting new technology, and the amount of compute power that will be available to solve problems in the future probably will dwarf everything that has been achieved so far, at least in percentage terms. There are estimates across the industry ranging from thousands to a million times the performance of today's systems, particularly when the hardware is optimized for the software, and vice versa. That's a lot of compute power, and it's only part of the overall picture. Machines will talk to machines, and they will train other machines, and at this point no one is certain what to fix or how to fix it if something goes wrong.

This is always a challenge with new technology, but in the past there was always a human in the loop. In fact, one of the reasons we hear more about AI as a tool, rather than as an autonomous technology, is that some people in key roles are worried about the liability implications of unleashing technology on the world before they know how it will actually behave. Having a human in the loop greatly reduces that liability, particularly if you read through all of the end-user license agreements. Machines talking to machines don't have power of attorney (or at least not yet).

There is a lot to be said for AI's potential, both positive and negative. It is a technology that will be with us for a very long time. It will create jobs and take jobs, and it will restructure economies and human behavior in unexpected ways. On the design side, we will need to learn to utilize the best parts and minimize the worst parts. But it would greatly help if we understood better how to predict its behavior, and what caused it to behave in ways it was not trained to. When that happens, we also need to know what other systems it communicated with and what the potential impact was on those systems.

Finally, for anyone designing these systems, we'd greatly appreciate the addition of an easy-to-find kill switch in case something goes really wrong. Failing gracefully is a nice idea, but not everything goes as planned.
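For what it's worth, a kill switch doesn't have to be exotic. Here is a minimal sketch of one pattern: an out-of-band flag that the system's control loop checks on every iteration. The file path and the step() callable are hypothetical placeholders; any external signal a human can reach (a file, a socket, a hardware pin) would serve the same purpose.

```python
# A minimal sketch of an "easy-to-find kill switch": an external flag
# checked on every iteration of the control loop. KILL_SWITCH and step()
# are hypothetical placeholders, not a standard mechanism.
import os
import time

KILL_SWITCH = "/var/run/ai_kill_switch"  # hypothetical well-known location

def control_loop(step, poll_interval=0.1):
    """Run the system's step() repeatedly until the kill switch appears."""
    while not os.path.exists(KILL_SWITCH):
        step()
        time.sleep(poll_interval)
    print("kill switch engaged; halting")  # fail loudly, not gracefully

# Hypothetical usage:
# control_loop(lambda: model.act(sensors.read()))
```

The design choice worth noting is that the switch lives outside the system being switched off, so it keeps working even when the system's own logic does not.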
