
The Future of Work and the Work of the Future – Dr. Jorge Grünberg

November 7, 2019


Thank you for the invitation: it was an honor to be
made a member of the academy. Why does this topic concern us? Why do robots generate more social anxiety than
other inventions or advances from previous decades? Why didn’t DNA, or cell phones or penicillin
generate such social anxiety at the time? They created major revolutions in health,
work, technology, economy and research. But they didn’t generate the current social anxiety. Not a day goes by without changes
in public opinion on this topic. Why do robots generate so much social anxiety? Because they trigger a psychoanalytical
reaction in us. They generate fear in us, a primordial emotion. That’s why there is unexpected feedback from newspapers
or social movements to technological developments. Even abstract concepts like machine learning
generate the fear of being replaced. From when we are born we worry about this,
that a sibling will replace us in our parents’ affection. In this case we are afraid for our jobs.
We are afraid something will replace us. This is another wave that is coming after the
introduction of free trade agreements and off-shoring. We thought our jobs would go to India,
Pakistan, Vietnam or lower paid countries. Then came the social upheaval caused by immigration,
leading to populism and changes in politics
due to fear of immigrants. This, too, is rooted in the
fear of replacement. This third chapter, after globalisation and
immigration, is the replacement of our jobs by robots. Instead of our job going abroad or to someone from
abroad, it will become virtual, it will disappear. Jobs will disappear from the human world,
generating even more concerns. Taking some words of advice from my Oxford tutor,
I’m going to start with the conclusion. So the question is: are we heading for a dystopia?
Are we heading for a society with no jobs for humans
or only low-paying jobs? My conclusion, as I will argue later, is no. There will be problems, just as there were
in the 1st, 2nd and 3rd Industrial Revolutions. In the 4th Industrial Revolution the problem will not be
lack of jobs; it will be a lack or shortfall of skills. Technically speaking, it won’t be a lack of knowledge,
it will be a skill gap. A gap between the knowledge needed for the new jobs
that will arise and people’s available skills. Stocks of skills and the flows of knowledge will
need to be compared with the skills needed for new jobs. If trucks become self-driving, many truck
driver jobs will disappear. The same will happen with self-driving buses. But other jobs will arise: digital map designers,
sensor maintenance technicians, vehicle software
programmers and so on. But to what extent can a bus or truck driver be trained
for a new profession, and at what cost? The real problem is a social prevision one,
associated with a gap in skills and not in employment. In the 21st century, talent will be a
critical resource for development. We don’t retain talent, since many people leave and,
ultimately, we do not attract talent. We don’t retain talent, since many people leave and,
ultimately, we do not attract talent. The arrival of choreographer Julio Bocca
shows how attracting talent can shake things up. Attracting talent to the country implies
changes to immigration policies. A cultural offering is also needed, because talented
people don’t want to come to a cultural desert. There are two main dialectics in the 21st century. One is today’s topic and the other, similarly troubling, is the dialectic between freedom and innovation. Until now we had assumed that innovation could not exist without freedom. Since the Renaissance we had assumed that innovation
could only be sustainable in a free society. There is an early stage of progress where authoritarian
states can develop, but they run into a wall. At one point they cannot advance further,
as historians say happened in the Soviet Union. There came a moment when a top-down society
could not manage an information society. This happened in the two Koreas: in the 2 decades
following World War 2, North Korea had a greater GDP than the South. East Germany had a higher GDP than West Germany,
in the first decades. When these societies become more complex and technology
is their motor, they are no longer sustainable. This is why South Korea, Taiwan and Singapore had to become democracies. They could not progress further if they
remained authoritarian countries. This gave us peace of mind, because if freedom
of thought was essential for innovation, then moral
and material progress would go hand in hand. However, China provides us with a major counter-example. Since the 80s when China began to grow, academics
have forecast that it would face the need for greater democracy. This has not been the case, and with 5G, China is a world leader in technology for the first time since the invention of gunpowder. The Chinese example leads to the question –
Is the relationship between freedom and
innovation obsolete? How does this change the views on enlightenment we have held from the Renaissance until today? Can there now be material progress without moral improvement? If there is rapid material enrichment accompanied
by moral and spiritual decline,
this could lead us to undesired consequences. Now I want to focus on the main topic: the
dialectic between artificial intelligence
and human intelligence. We are living the 4th Industrial Revolution, where
human actions are replaced by algorithms. The 1st and 2nd Industrial Revolutions replaced
the muscle power of humans and oxen with steam. The 3rd Industrial Revolution automated
human actions like dividing, multiplying,
indexing and searching. In the 4th Industrial Revolution, the main
novelty is the automation of non-programmable tasks. They are no longer human actions easily
defined in algorithms. They call for skills like common sense, judgment,
empathy, emotion or intuition. Machines would not be able to reproduce these
human actions without consciousness, emotions or intuition. Now in recent years and even months, machines
have broken into fields previously reserved to human minds. The novelty is that Artificial Intelligence technology has gone from 1.0 to 2.0
in a leap from the deductive era to the inductive era. The first stages of Artificial Intelligence, dating back to the 50s,
did not progress very far due to a lack of
adequate technological and mathematical tools. But the concepts of neuronal networks,
for example, already existed. The first stage of the deductive era involved expert systems
where a human expert would establish all the rules needed,
for example, to diagnose a throat infection. All kinds of rules – whether to authorise a credit,
or if a signature might be forged. This was applied in limited spheres of knowledge,
asking multiple experts to explain their tasks
and how they did their jobs. This would be translated into rules
for making expert systems. In some areas, like chess, it worked very well. Chess has very few uncertainties,
with pre-established rules. So, for some things deductive AI worked well, but in
other cases it could not progress and ran into a wall.
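To make the deductive approach concrete, here is a minimal sketch in Python (the rules, thresholds and function names are purely illustrative, not taken from the talk): a human expert's knowledge is written out by hand as if-then rules, and the machine simply applies them.

```python
# Minimal sketch of a rule-based expert system (deductive AI).
# The rules and thresholds are illustrative only, not a real clinical or credit model.

def diagnose_throat(symptoms: dict) -> str:
    """Apply hand-written expert rules to observed symptoms."""
    if symptoms.get("fever") and symptoms.get("white_patches"):
        return "suspected bacterial infection - order a lab test"
    if symptoms.get("sore_throat") and symptoms.get("runny_nose"):
        return "probable viral infection - rest and fluids"
    return "inconclusive - refer to a specialist"

def authorise_credit(income: float, debt: float, late_payments: int) -> bool:
    """Hand-written rules of the kind a credit analyst might dictate."""
    if late_payments > 2:
        return False
    return debt / max(income, 1.0) < 0.4  # expert-chosen threshold

if __name__ == "__main__":
    print(diagnose_throat({"fever": True, "white_patches": True}))
    print(authorise_credit(income=50_000, debt=10_000, late_payments=0))
```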
In the next stage, the inductive era, the star technology is neural networks, or machine learning. I mention two books: the first, published in 2004
by Levy and Murnane, perfectly reflects the deductive philosophy. It reflects the thoughts of two top MIT and Harvard
academics on artificial intelligence in 2004. For the inductive age, “The Second Machine Age”
by Brynjolfsson and McAfee shows how much progress
has been made in what machines can do in just 10 years. In the deductive age, Levine and Murnane concluded
that we already had machines that could win at chess
and other advanced tasks. They raised concerns about human tasks that could
be replaced and were beginning to be replaced. ATMs substituted cashiers, car parks became automatic
and series of routine, predictable and programmable
activities were replaced. However, the also book contains a famous example
-they now see the funny side of- of things that will
never be automated, notably self-driving cars. “Driving will never automated” – this is not the first
time science has proved someone wrong, but it is
unusual in such a short space of time (6 or 7 years). Just three years later, Google already had a fleet in
California, followed by others, including the
current leader, Waymo. Livine and Murnane’s automatability analysis became
obsolete within just a few years because it was
based on deductive technologies. They could not imagine an automation model other than
step-by-step analysis of what a human would do. Already in 1931, Gödel had mathematically demonstrated
that deductive reasoning had its limits. The inductive age arises when expert systems and
their technologies run into a wall, called Polanyi’s paradox. Polanyi’s paradox states that we cannot explain
many of the things we know how to do. When a bird flies by, we can identify it with just
a fleeting glimpse; this is a hard-wired ability. Imagine trying to write an algorithm for riding
a bicycle – it’s something we do but can’t explain. When we walk we avoid obstacles like chairs
automatically and subconsciously. These subconscious tasks are those that defeated
the first generation of great computer mathematicians
seeking to create artificial intelligence. This defeated Marvin Minsky who said “I give up,
the machine can’t even serve me a cup of coffee”. Abstract thought is much more recent than all
these skills: prehistoric man had innate survival instincts, like running away from mammoths. Man could sense danger in a forest or from a rival tribe member; these abilities evolved over millions of years. Evolution has hard-wired these subconscious sensorimotor skills into us; they are easy to do
but very difficult to explain. By contrast, with chess, it is very hard to play
well but very easy to program. In the inductive age, the key factor is that
traditional programming does not allow processes
that cannot be described to be automated. Many people are taught to program, but it is almost
impossible to program something that cannot be
described step-by-step. AI 2.0 techniques seek to overcome Polanyi’s paradox
by using tools that work inductively. Instead of proving theories with data, these
tools take the data and seek to generate theories.
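By contrast with the hand-written rules above, here is a minimal sketch of the inductive style (using scikit-learn as an assumed tool, since the talk names no library, and a tiny synthetic dataset): the model is shown labelled examples and induces its own decision rule, which can then be inspected.

```python
# Minimal sketch of inductive machine learning: the rule is induced from data,
# not dictated by an expert. Uses scikit-learn; the dataset is synthetic and illustrative.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [debt-to-income ratio, number of late payments]
X = [[0.10, 0], [0.20, 1], [0.60, 0], [0.30, 3], [0.80, 2], [0.15, 0]]
y = [1, 1, 0, 0, 0, 1]  # 1 = credit authorised, 0 = refused

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The induced "theory" can be inspected: the thresholds were learned, not written by hand.
print(export_text(model, feature_names=["debt_ratio", "late_payments"]))
print(model.predict([[0.25, 0]]))  # classify a new, unseen applicant
```

The printed tree shows thresholds the algorithm chose on its own, which is the sense in which the data generates the theory rather than the theory being imposed on the data.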
The outcome of this was that, in 2017, AlphaZero was able to beat Stockfish at chess,
without human intervention. By overcoming Polanyi’s paradox, the economic
question is how does the comparative advantage
between humans and machines change? In the job market, these performance statistics
can tip the balance. Therefore, it is now possible to automate
non-programmable human tasks – which ones? You might find that what I saw now will be wrong
tomorrow, since things are changing every day. Maybe a nurse’s task like inserting a catheter can be automated today, but not inserting a contact lens
– this could change from one day to the next. Now tasks like driving vehicles, call centers, portfolio
management or legal discovery are all automated. The question is not what machines CAN do, but rather, what CAN’T they do?
A field whose scope is narrowing by the day. Having said all this, we must ask: what effects
will technology have on employment? There are 4 main blocks, although mainstream media
and less specialised studies tend to focus
on the jobs that will disappear. Jobs will disappear because they have always
disappeared, like elevator attendants with the
advent of the automatic elevator. The same goes for stagecoach drivers or farriers,
now restricted to racetracks or riding clubs. These professions used to employ millions of people. In 2013, two Oxford scholars, Frey and Osborne,
published an article stating that this block
made up 47% of jobs. This gained them great publicity, but I think their
methods were flawed, and by their own admission
they are now reexamining their findings. The consultancy firm McKinsey estimates jobs
lost at around 5% in the coming years. A human job is lost when all its tasks are automated. Jobs can also be lost because some of
their tasks are automated. For example, one call center coordinator can handle
100 lines by supervising voice recognition systems. So if one operator and a couple of programmers
can replace 100 call staff, then jobs are lost. However, jobs can also be created, like map
programmers, sensor technicians, digital
communications programmers and so on. But these jobs gained are high-skill, high-pay jobs;
a lot of knowledge is required to earn well. The problem is that those whose jobs disappear
will need to bridge the gap to the new jobs
by acquiring new skills. Furthermore, they must be willing and capable
of learning and paying for the training
and be prepared to have mobility for those jobs. New jobs will be created, but the connection
between the jobless and these jobs will not be obvious,
a lot of work needs to be done. Not only will the jobs be created, but already,
20% of jobs in the US did not exist in 1980. We’re talking about a huge renovation of the
active job portfolio in the economy. The fourth group concerns jobs created and
shows how many analyses have been mistaken. Many jobs are created through greater productivity
and higher-paid employees who spend on cheap
robot-produced goods like telephones, cars or services Health, education or rental costs decrease while
increasing the number of low-skill/low-pay jobs. This generates jobs for cleaners, security guards,
hair stylists, masseuses, dog walkers or
low-skilled technicians’ jobs with low pay. This gives us two variables that reduce employment
and two that increase it, connected by how
our societies can organise training in new skills. Without knowledge, people cannot perform new
jobs and countries will not be able to progress
without an adequate stock of knowledge. Apart from this stock, a country cannot progress
without an adequate flow of knowledge. The stock of knowledge is less important
because technological change means that
obsolescence of knowledge happens faster. If a highly educated country, like Uruguay in the 50s,
a knowledge stock will not take 30 years to
become obsolete; it will only take 4 or 5 years. Not only is the stock of knowledge important,
but so is the flow of knowledge citizens have access to. So, although jobs will be created, in the new
geopolitical context of the knowledge society,
each country will experience different effects. Depending on the knowledge stocks and flows, the
mix of jobs available to citizens will be different. If Uruguay wants to implement job-creation policies
in the country, it is not the quantity of jobs
that matters, but rather, the qualitative aspect. If I generate 900 thousand jobs for cleaners,
gardeners or dog walkers, society will be poorer,
more unequal and dissatisfied. On the other hand, we can generate jobs in the upper quadrant: technological, management,
consultancy and professional jobs. The same number of well-paid, high-quality jobs
will ensure a more satisfied and ethical society and,
above all, a sound basis for future progress. How Uruguay fares in coming years depends
on the quadrant we occupy. We could have a reduced group of people
-the “1%”- living very well in houses behind walls,
while many others are just subsisting. Or, by contrast, we can manage to create
a number of high-qualification high-paying jobs that
give people opportunities. In short, technological change will lead
to constant changes in the way we work and
people will constantly need new knowledge and re-training. So, the answer to the effect of technology on jobs
lies in two variables: one is the rate of technological
change and the other is speed and quality of learning. As the rate of technological change increases,
so will the speed and quality of learning new skills
need to increase. If the rate of technological change is very high,
this does not depend on Uruguay or government or
institutional decisions to be isolated from change. In this global world, technological change will reach
Uruguay, just like pollution, drugs, cyanobacteria,
fake news, computer viruses or superbugs. All this will reach Uruguay; there is
no option to industrialise internally. If the rate of technological change accelerates,
then our re-learning systems will need to be very
efficient, and they are not – they are non-existent. We have no effective adult re-learning system
in Uruguay; the social security system is focused on what will happen after we stop working. A social security 2.0 system will have to contemplate
how to ensure workers can continue working. In short, the challenge for the education systems
is not just to educate people but also
to train them to be educable. Educability is the ability of people to adapt
quickly to new concepts. The current Uruguayan education system is
educating too few people, and educating them
poorly, as PISA and national evaluations have shown. Furthermore, it does not generate educable people,
and our society in general is very resistant to change. The redesigning of educational systems will be a
critical path to prosperity, posing a great challenge. Talent, rather than capital, will be
the critical production factor. So much so, that in June, a top self-driving vehicle
company, drive.ai, went under and nevertheless, Google
invested $200M in this failed company to acquire its talent. The company had 70 top engineers, PHDs and
AI experts which Google saw as a bargain. This would
not have been possible if the company wasn’t broke. We have to redesign education systems,
for which we face two major barriers. I just want to mention that re-learning and
re-training are a public asset, not a private one. If a worker sells his car or mortgages his home
to invest in new education, this still may not be enough,
because he might be alone in doing so. So, when a foreign or local investment project
arises, it will probably not find the engineers
and PHDs it needs and won’t choose that country. This is Uruguay’s case, we attract industrial
plants, but not research centers for Microsoft,
Google or Facebook. We aren’t attracting research laboratories, despite
the presence of the Pasteur or Clemente Estable Institutes. We don’t attract high technology projects because
they can’t find the talent they need in Uruguay. In the future, talent will be the critical factor
and not a private asset because those who invest in
their own education will end up leaving the country. High technology investors must perceive the
availability of the talent they will be needing;
this is why it is a public asset. What fiscal instruments exist to foster re-learning?
One of them is the re-training or re-learning account. This is already working in France, as a public
policy with approved learning providers
where people can spend a 2000 euro grant. Others suggest taxing robots, which is a bad idea,
it would be like taxing the first airplanes or tractors
in the agricultural revolution. Then there’s basic universal income, which is
unjustifiable and unpayable, and ethically I think
if you want something you have to work for it. Beyond the economic aspect, re-learning systems
represent a huge educational challenge. The comparative advantage of humans, now that many
human tasks can be automated, will lie in our social-emotional skills. Consider creativity, for example: machines can now
take investment decisions or decide where drones
should launch missiles. But, we still don’t have machines that can show
empathy to people in distress, like psychotherapists. We don’t have machines capable of creativity,
to replace Bob Dylan or Ruben Rada; they don’t exist. Machines can’t exercise leadership
or foster allegiances. All these human qualities cannot be automated
and are unlikely to be for a long time to come. We need to teach humans these skills, which is
difficult since schools were designed to take people from an
agricultural world into an industrial context. It’s easy to teach reading, writing, arithmetic
or archiving, since these are human inventions. Humans created alphabets and counting systems
and, by applying reverse engineering,
we are able to teach them. However, non-cognitive skills are sensorimotor in origin; they weren’t invented by humans. They evolved biologically, so how are we supposed
to teach empathy in our schools and universities? How can we systematically teach creativity,
or intuition, or trustworthiness? Basically, we don’t know yet, and for
course designers it’s like climbing the Himalayas. Furthermore, in the future, teaching processes
will need to be more personalised, since everyone
will be at a different level. I know something about computing, maths, chemistry
and need to learn something specific, meaning that
everyone is at a different level. We will need to teach according to people’s
knowledge stocks, cognitive background,
personality, age or support networks. Further in the future, we are heading towards bio-teaching,
based on precise genetics or brain configuration. Learning will need to adapt quickly,
every 3 years, 1 year or 6 months, which will
be impossible using last century’s cognitive models. So, to round off, Artificial Intelligence
not only poses problems, but it could
also be part of the solution. To help us see if AI can be part of the solution,
we will examine Bloom’s 2-sigma findings. In the 80s, the educational psychologist Benjamin Bloom made
a revolutionary discovery, which in hindsight
seems intuitive, but he raised it first. He took 3 groups of pupils. The first was a normal
class – 1 teacher and 30 students, or “frontal teaching”. The second group applied mastery learning, where
a student cannot continue to the next level
without mastering the previous topic. The third group applied mastery learning,
but with an individual tutor for each student. The findings were overwhelming-the third model
produced vastly better results than the others, like
frontal teaching, used in Uruguay and most of world) He found that the tutor model allowed 98% of
students to have better results than
the control group. This is known as the 2-sigma result;
two standard deviations above the norm.
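As a quick aside on where the 98% figure comes from (my own back-of-the-envelope arithmetic, assuming roughly normally distributed scores; it is not a calculation from the talk): a student whose score sits two standard deviations above the control-group mean outperforms about 98% of that group, since

```latex
\[
  P(X \le \mu + 2\sigma) \;=\; \Phi(2) \;\approx\; 0.977 \;\approx\; 98\%,
\]
% where X is a control-group score with mean \mu and standard deviation \sigma,
% and \Phi is the standard normal cumulative distribution function.
```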
This was revolutionary technically but also morally, which is why it attracted so much attention
because it means that everyone can learn. Unfortunately, the current organisation of
teaching means that as teachers we take a group
and already know what is going to happen. We know that 10% will do well, 60% will
be average and 30% will perform poorly. Expectations about students are pre-established
at primary, secondary and university levels. What Bloom showed is that the problem is
not the students, it’s the methods. If you use the right methods, everyone can learn.
This is morally revolutionary. The graph shows that the group with
tutors is two standard deviations ahead. Bloom’s result is problematic,
despite seeming to be the perfect solution. The problem is that the tutorial system
is not scalable; there are not enough resources
or tutors for all the students. Technically and morally I know how I should
teach, but logistically and financially I can’t
afford to offer this method, hence the problem we face. The interesting question today is whether AI
can resolve Bloom’s problem. Until now computer teaching has been limited
to multiple choice and similar techniques. I can’t ask a student to explain the causes
of the French Revolution and have a computer read the answer; I have to choose. I have to choose between setting multiple
choice tests to assess 500 students quickly,
but at basic levels of teaching. Or, I can ask them to write essays, with
research and examples, which for 500 students
would be very time-consuming. Now with machine learning, there may be prospects
of machines capable of evaluating not only natural language, but also empathy or versatility, responding naturally to the individual
application of knowledge by students. Until now this could not be done for two reasons. First, machine learning needs huge databases
and this data about our students is not available. We have access to huge amounts of stock market
transactions, permitting AI portfolio management programs. Some countries have access to large-scale medical
or consumer behavior data or traffic information,
used by self-driving vehicles. But in education, problems like privacy arise,
from parents or students, or the information is not
standardised or digitised. Some countries grade from 1 to 12, our universities use different marking systems, and there are no standard methods. The non-standard information that is available may be fairly clean and precise, but it is scarce. Another problem is that machine learning is much slower than human learning. A human can learn to drive in 20-40 hours,
while the leading self-driving system, Waymo, has 10 million
miles of learning without yet mastering driving. Someone can learn to drive in Montevideo and
they can then drive in Madrid. This transfer of learning in humans is fairly natural,
while for machines it remains problematic. Then we ask if AI will replace teachers; I would
say no, because I want to carry on working. Self-driving cars mark the limit, since driving
is easier than teaching, with Tesla postponing
the launch of their system from 2021 to 2027. Other innovations like self-parking will emerge
but 100% automated vehicles are not yet ready. Teaching is harder than driving. Nevertheless,
our profession will change because
teachers will have to become augmented. Just like accountants, civil engineers, chemists
or systems analysts will have to adapt,
teachers will also have to adapt. They will need to work with
flipped learning or automatic tutors. Recommendation systems like those in the
leisure industry will also be used, to recommend
bibliographies or reusable learning resources. So, to conclude, if we can solve Bloom’s problem,
we will achieve a revolutionary leap in technical,
financial and moral terms. Thank you very much.
