I've got this ominous feeling that 2018 could be the year that everything changes dramatically. The incredible breakthroughs we saw in deep learning in 2017 will carry over in a very powerful way in 2018. A lot of the work coming out of 2017's research will migrate into everyday software applications.

As I did last year, I’ve compiled a list of predictions for where deep learning will go in 2018.

1. The majority of deep learning hardware startups will fail

Many deep learning hardware startups will finally deliver their silicon in 2018. Most will be busts, because they will neglect to deliver the software needed to support their new solutions. Hardware is in these firms' DNA; unfortunately, in the DL space, software is just as important. Most of these startups don't understand software and don't understand the cost of developing it. They may deliver silicon, but nothing will ever run on it.

The low-hanging fruit of systolic-array solutions has already been picked, so we won't see the massive 10x performance upgrade we saw in 2017. Researchers will start using these tensor cores not only for inference but also to speed up training.
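
To make that concrete, here is a minimal sketch of what training on tensor cores tends to look like in practice, assuming PyTorch's automatic mixed-precision API; the model and data are placeholders, not a recipe from any particular vendor.

```python
import torch
import torch.nn as nn

# Minimal mixed-precision training sketch (placeholder model and data).
# Assumes a CUDA GPU with tensor cores (Volta or later).
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()      # loss scaling guards against FP16 underflow

for step in range(100):
    x = torch.randn(64, 1024, device="cuda")
    y = torch.randint(0, 10, (64,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():       # eligible matmuls run in FP16 on tensor cores
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()         # backward pass on the scaled loss
    scaler.step(optimizer)                # unscale gradients, then update weights
    scaler.update()                       # adapt the scale factor for the next step
```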

Intel’s solution will continue to be delayed and will likely disappoint. The record shows Intel was unable to deliver on a mid-2017 release, and it’s anybody’s guess when the company will ever deliver. It’s late, and it’s going to be a dud.

Google will continue to surprise the world with its TPU developments. Perhaps Google gets into the hardware business by licensing its IP to other semiconductor vendors. This will make sense if Google continues to be the only real player in town other than Nvidia.

2. Meta-learning will be the new SGD

A lot of strong research in meta-learning appeared in 2017. As the research community collectively comes to understand meta-learning much better, the old paradigm of stochastic gradient descent (SGD) will fall by the wayside in favor of a more effective approach that combines both exploitative and exploratory search methods.
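
To make the contrast with plain SGD concrete, here is a minimal sketch of one gradient-based meta-learning scheme, a MAML-style inner/outer loop on a toy family of regression tasks; the tasks and the linear model are placeholders, and this is only one of many meta-learning formulations.

```python
import torch

# MAML-style meta-learning sketch: w, b are meta-parameters shared across tasks.
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
meta_opt = torch.optim.Adam([w, b], lr=1e-3)
inner_lr = 0.01

def sample_task():
    # Each "task" is a different random line y = a*x + c.
    a, c = torch.randn(1), torch.randn(1)
    def batch(n=20):
        x = torch.randn(n, 1)
        return x, a * x + c
    return batch

for meta_step in range(1000):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                              # a small batch of tasks
        task = sample_task()
        x_s, y_s = task()                           # support set for this task
        inner = ((x_s * w + b - y_s) ** 2).mean()   # inner loop: one plain SGD step
        gw, gb = torch.autograd.grad(inner, (w, b), create_graph=True)
        w_a, b_a = w - inner_lr * gw, b - inner_lr * gb
        x_q, y_q = task()                           # query set from the same task
        meta_loss = meta_loss + ((x_q * w_a + b_a - y_q) ** 2).mean()
    meta_loss.backward()        # the meta-gradient flows back through the inner SGD step
    meta_opt.step()
```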

Progress in unsupervised learning will be incremental, but it will primarily be driven by meta-learning algorithms.

3. Generative models drive a new kind of modeling

Generative models will find themselves in more scientific endeavors. At present, most research is performed in generating images and speech. However, we should see these methods incorporated in tools for modeling complex systems. One of the areas where you will see this activity is in the application of deep learning to economic modeling.

4. Self-play is automated knowledge creation

The way AlphaGo Zero and AlphaZero learn from scratch through self-play is a quantum leap. In my opinion, it has the same level of impact as the advent of deep learning. Deep learning discovered universal function approximators. RL self-play discovered universal knowledge creation.
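
To illustrate the mechanism in miniature, here is a self-play sketch on a toy game, where the agent's own games are its only training data; the game, the tabular values, and the learning rule are stand-ins for illustration and bear no resemblance to DeepMind's actual systems.

```python
import random
from collections import defaultdict

# Toy self-play: players alternately remove 1, 2, or 3 stones from a pile of 21;
# whoever takes the last stone wins. One value table plays both sides against
# itself and learns the optimal "leave a multiple of 4" strategy from its own games.
Q = defaultdict(float)          # Q[(stones_left, action)] = value for the player to move
ACTIONS = (1, 2, 3)
EPS, ALPHA = 0.1, 0.5

def choose(stones, greedy=False):
    legal = [a for a in ACTIONS if a <= stones]
    if not greedy and random.random() < EPS:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

for game in range(20000):
    stones, history = 21, []
    while stones > 0:
        a = choose(stones)
        history.append((stones, a))
        stones -= a
    # The player who took the last stone won; walk back through the game,
    # flipping the sign of the outcome at each ply (negamax-style credit assignment).
    reward = 1.0
    for s, a in reversed(history):
        Q[(s, a)] += ALPHA * (reward - Q[(s, a)])
        reward = -reward

# The greedy policy typically recovers the optimal moves, e.g. 1, 2, 3, 1 below.
print([choose(s, greedy=True) for s in (5, 6, 7, 9)])
```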

Do expect to see a lot more advances related to self-play.

5. Intuition machines will bridge the semantic gap

This is my most ambitious prediction. We will bridge the semantic gap between intuition machines and rational machines. Dual process theory (the idea of two cognitive machines, one that is model-free and the other that is model-based) will be the more prevalent conceptualization of how we should build new AI. The notion of artificial intuition will be less of a fringe concept and more of a commonly accepted idea in 2018.
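
As a rough sketch of what such a dual-process architecture could look like, the toy agent below answers with a cheap model-free policy when it is confident and falls back to a model-based planner when it is not; every name and component here is a hypothetical placeholder, not an established design.

```python
import math

# Dual-process toy: a fast model-free policy ("intuition") answers by default;
# an expensive model-based planner is consulted only when the fast policy is uncertain.

def entropy(probs):
    return -sum(p * math.log(p) for p in probs.values() if p > 0)

class DualProcessAgent:
    def __init__(self, fast_policy, planner, threshold=0.5):
        self.fast_policy = fast_policy   # "System 1": learned, model-free, cheap
        self.planner = planner           # "System 2": model-based search, expensive
        self.threshold = threshold       # uncertainty level that triggers planning

    def act(self, state):
        probs = self.fast_policy(state)          # intuitive answer with confidence
        if entropy(probs) < self.threshold:
            return max(probs, key=probs.get)     # trust the intuition
        return self.planner(state)               # deliberate: explicit search instead

# Fake components for illustration only.
def fake_policy(state):
    return {"left": 0.95, "right": 0.05} if state == "easy" else {"left": 0.5, "right": 0.5}

def fake_planner(state):
    return "right"   # pretend this came from tree search over a learned world model

agent = DualProcessAgent(fake_policy, fake_planner)
print(agent.act("easy"))   # "left"  -- answered by the fast policy
print(agent.act("hard"))   # "right" -- answered by the planner
```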

6. Explainability is unachievable — we will just have to fake it

There are two problems with explainability. The more commonly known problem is that the explanations have too many rules for a human to possibly grasp. The second, less known problem is that machines will create concepts that are completely alien and defy explanation. We already see this in the strategies of AlphaGo Zero and AlphaZero. Humans will observe that a move is unconventional, but they may simply not have the capacity to understand the logic behind it.

In my opinion, this is an unsolvable problem. What will happen instead is machines will become very good at “faking explanations.” In short, the objective of explainable machines is to understand the kinds of explanations a human will be comfortable with or can understand at an intuitive level. However, a complete explanation will be inaccessible to humans in the majority of cases.

We will have to make progress in explainability in deep learning by creating “fake explanations.”
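
One concrete way to read "fake explanations" is the family of post-hoc surrogate methods, of which LIME-style local approximation is the best-known example: sample around a single prediction, query the black box, and fit a simple model whose weights serve as the story a human hears. The sketch below assumes scikit-learn is available and uses a placeholder black box.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Post-hoc, LIME-style surrogate sketch: the "explanation" is a local linear fit
# to the black box's behavior near one input, not the model's actual reasoning.
rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model: a nonlinear score over 4 features.
    return np.tanh(2 * X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 2])

x0 = np.array([0.2, -0.4, 1.0, 0.3])                 # the prediction to "explain"
X_local = x0 + 0.1 * rng.standard_normal((500, 4))   # perturbations near x0
y_local = black_box(X_local)

# Weight samples by closeness to x0, then fit the interpretable surrogate.
weights = np.exp(-np.sum((X_local - x0) ** 2, axis=1) / 0.02)
surrogate = Ridge(alpha=1.0).fit(X_local - x0, y_local, sample_weight=weights)

# The surrogate's coefficients are the human-facing story: local and simplified.
print(dict(zip(["f0", "f1", "f2", "f3"], surrogate.coef_.round(3))))
```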

7. Deep learning research information will rain down

2017 was already a difficult year for people trying to follow deep learning research. The number of submissions to the ICLR 2018 conference was around 4,000 papers. A researcher would have to read roughly 10 papers a day just to keep up with this one conference.

The problem is worsened in this space because the theoretical frameworks are all works in progress. To make progress in the theoretical space, we need to seek out more advanced mathematics that can give us better insight. This is going to be a slog simply because most deep learning researchers don’t have the right mathematical background to understand the complexity of these kinds of systems. Deep learning needs researchers coming from complexity theory, but there are very few of these kinds of researchers.

As a consequence of too many papers and poor theory, we are left with the undesirable state in which we find ourselves today.

What is also missing is a general roadmap for artificial general intelligence (AGI). The theory is weak; therefore, the best we can do is create a roadmap with milestones that relate to human cognition. We only have a framework that originates from speculative theories coming from cognitive psychology. This is a bad situation because the empirical evidence coming from these fields is spotty at best.

Deep learning research papers will perhaps triple or quadruple in 2018.

8. Industrialization comes via teaching environments

The road to more predictable and controlled development of deep learning systems goes through the development of embodied teaching environments. I have discussed this in more detail in earlier articles. If you want to see the crudest form of teaching technique, you only have to look at how deep learning networks are trained today. We are all due to see a lot more progress in this area.
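
As a toy contrast with that crude default, here is a minimal sketch of a curriculum-style teaching loop, in which harder tasks are handed out only after the learner succeeds on easier ones; the learner, the tasks, and the thresholds are all hypothetical placeholders rather than any established system.

```python
import random

# Toy "teaching environment": instead of uniformly sampling training tasks,
# a curriculum promotes the learner to harder tasks as its success rate improves.
LEVELS = ["easy", "medium", "hard"]
skill = {"easy": 0.5, "medium": 0.2, "hard": 0.1}   # the learner's current competence

def attempt(task):
    # Stand-in learner: each attempt at a task slightly improves its skill.
    success = random.random() < skill[task]
    skill[task] = min(1.0, skill[task] + 0.01)
    return success

class Curriculum:
    def __init__(self, promote_at=0.8, window=20):
        self.level, self.recent = 0, []
        self.promote_at, self.window = promote_at, window

    def next_task(self):
        return LEVELS[self.level]

    def report(self, success):
        self.recent = (self.recent + [float(success)])[-self.window:]
        # Promote once the rolling success rate on the current level is high enough.
        if len(self.recent) == self.window and sum(self.recent) / self.window >= self.promote_at:
            self.level = min(self.level + 1, len(LEVELS) - 1)
            self.recent = []

curriculum = Curriculum()
for step in range(500):
    task = curriculum.next_task()
    curriculum.report(attempt(task))
print("final level:", curriculum.next_task())   # typically reaches "hard"
```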

Expect to see more companies revealing their internal infrastructure that explains how they deploy deep learning at scale.

9. Conversational cognition arises

The way we measure progress toward AGI is antiquated. We need a new paradigm that addresses the dynamic (i.e., non-stationary) complexity of the real world. We should see more coverage of this new area in the coming year. I will be speaking about this new conversational cognition paradigm at Information Energy 2018 in Amsterdam, March 1-2.

10. We’ll demand ethical use of artificial intelligence

The demand for more ethical use of artificial intelligence will increase. The population is now becoming more aware of the disastrous effects of unintended consequences of automation run amok. Simplistic automation that we find today on Facebook, Twitter, Google, Amazon, etc., can lead to unwanted effects on society.

We need to understand the ethics of deploying machines that are able to predict human behavior. Facial recognition is one of the more dangerous capabilities at our disposal. Algorithms that can generate media indistinguishable from reality are going to become a major problem. We as a society need to begin demanding that AI be used solely for the benefit of society as a whole and not as a weapon to increase inequality.

Expect to see more conversation about ethics in the coming year. However, don’t expect new regulations. Policy makers are still years behind the curve in understanding the impact of AI on society. I don’t expect them to stop playing politics and start addressing the real problems of society. The U.S. population has fallen victim to numerous security breaches, yet we’ve seen no new legislation or initiatives to address this serious problem. So don’t hold your breath that our leaders will suddenly discover wisdom.

Prepare for impact!

That’s all I have for now. 2018 will be a major year, and we all better buckle our seatbelts and prepare for impact.

This story was originally published on Medium. Copyright 2018.

Carlos E. Perez is the author of Artificial Intuition and the Deep Learning Playbook and founder of Intuition Machine Inc.