
Robotics: Crash Course AI #11

November 3, 2019


John-Green-bot: Hi, I'm John-Green-bot, and welcome to Crash Course AI! Today we're learning about meeee!!!!

Jabril: Hey! This is my show!

John-Green-bot: Uh oh…

Jabril: It's ok, John-Green-bot, we can do this intro together. Robotics is a broad topic, because it's the science of building a computer that moves and interacts with the world (or even beyond the world, in space).

John-Green-bot: So today we're going to talk about robots, like me, and what makes us tick!

[INTRO]

Some of the most exciting AIs are robots that move through the world with us, gathering data and taking actions! Robots can have wings to fly, fins to swim, wheels to drive, or legs to walk, and they can explore environments that humans can't even survive in. But unlike humans, who can do many different things, robots are built to perform specific tasks, with different requirements for hardware and for learning. Curiosity is a pretty amazing robot that has spent seven years exploring Mars for us, but it wouldn't be able to build cars like industrial robots or clean your apartment like a Roomba.

Robotics is such a huge topic that it's also part of Computer Science, Engineering, and other fields. In fact, this is the third Crash Course video we've made about robots!

In the field of AI, robotics is full of huge challenges. In some cases, what's easy for computers (like doing millions of computations per second) is hard for humans. But with robotics, what's easy for humans, like making sense of a bunch of diverse data and complex environments, is really hard for computers. For example, in the reinforcement learning episode, we talked about walking and how hard it would be to precisely describe all the joints and small movements involved in a single step. But if we're going to build robots to explore the stars or get me a snack, we have to figure out all those details, from how to build an arm to how to use it to grab things. So we're going to focus on three core problems in robotics: Localization, Planning, and Manipulation.

The most basic feature of a robot is that it interacts with the world. To do that, it needs to know where it is (which is localization) and how to get somewhere else (which is planning). So localization and planning go hand in hand.

We humans do localization and planning all the time. Let's say you go to a new mall and you want to find some shoes. What do you do? You start to build a map of the mall in your head by looking around at all the walls, escalators, shops, and doors. As you move around, you update your mental map and keep track of how you got there. That's localization. And once you have a mental map and know the way to the shoe store, you can get there more quickly next time. For example, you can plan a route using the escalator, because it's faster than the elevator.
The most common way we take in that data is with our eyes, through perception. Our two slightly different views of the world allow us to see how far away objects are in space. This is called stereoscopic vision.
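To put a little math behind that: with two cameras a known distance apart, the shift in where an object appears between the two images (its "disparity") is enough to recover depth. Here's a minimal sketch with made-up camera numbers, not the specs of any real robot:

```python
# Minimal sketch of depth from stereo disparity (illustrative values):
# two cameras a known baseline apart see the same point at slightly
# different horizontal pixel positions, and that shift tells us distance.

def depth_from_disparity(x_left: float, x_right: float,
                         focal_length_px: float, baseline_m: float) -> float:
    """Depth Z = f * B / d, where d is the disparity in pixels."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("Point must appear further left in the left image.")
    return focal_length_px * baseline_m / disparity

# Example: cameras 6 cm apart, 700 px focal length, 10 px of disparity.
print(depth_from_disparity(360.0, 350.0, focal_length_px=700.0, baseline_m=0.06))
# -> 4.2 (meters): smaller disparities mean farther-away objects.
```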
And this mental map is key to what many robots do too, if they need to move around the world. As they explore, they need to simultaneously track their position and update their mental map of what they see. This process is called Simultaneous Localization and Mapping, which goes by the cool nickname SLAM.
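Real SLAM systems handle noisy sensors and uncertain motion with heavy-duty probabilistic tools, but the core loop is simple enough to sketch. This hypothetical toy robot (not John-Green-bot's actual software) dead-reckons its own pose and stamps sensed obstacles into a growing map at the same time:

```python
import math

# Toy flavor of SLAM: track pose from motion commands (localization)
# while adding sensed obstacles to a map (mapping), simultaneously.
class TinySlam:
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0  # pose estimate
        self.landmarks = set()                        # the growing "mental map"

    def move(self, distance: float, turn: float) -> None:
        # Localization half: update where we think we are.
        self.heading += turn
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

    def sense(self, range_m: float) -> None:
        # Mapping half: place the sensed obstacle relative to our
        # *current pose estimate*, which is why localization errors
        # smear the map, and map errors mislead localization.
        lx = self.x + range_m * math.cos(self.heading)
        ly = self.y + range_m * math.sin(self.heading)
        self.landmarks.add((round(lx, 1), round(ly, 1)))

robot = TinySlam()
robot.move(1.0, 0.0); robot.sense(2.0)          # wall 2 m ahead
robot.move(0.5, math.pi / 2); robot.sense(1.0)  # turn left, sense again
print(robot.landmarks)
```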
But instead of eyes, robots use all kinds of different cameras. Many robots use RGB cameras for perception, which gather color images of the world. Some robots, like John-Green-bot, use two cameras to achieve stereoscopic vision like us! But robots can also have sensors that help them see the world in ways that humans can't.
One example is infrared depth cameras. These cameras measure distances by shooting out infrared light (which is invisible to our eyes) and then seeing how long it takes to bounce back. Infrared depth cameras are how some video game motion sensors work, like how the Microsoft Kinect could figure out where a player is and how they're gesturing. This is also part of how many self-driving cars work, using a technology called LiDAR, which emits over 100,000 laser pulses a second and measures when they bounce back. This generates a map of the world that marks out flat surfaces and the rough placement of 3D objects, like a streetlamp, a mailbox, or a tree on the side of the road.
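The round-trip timing idea is easy to check with a worked example. (One caveat: some infrared depth cameras, like the original Kinect, instead project a known dot pattern and measure how it deforms; pulse timing is the principle behind LiDAR and time-of-flight cameras.)

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

# Back-of-the-envelope time-of-flight: a pulse travels out and back,
# so the distance is half of (speed of light x round-trip time).
def distance_from_round_trip(seconds: float) -> float:
    return SPEED_OF_LIGHT * seconds / 2.0

# A streetlamp about 15 m away returns the pulse in ~100 nanoseconds.
print(distance_from_round_trip(100e-9))  # -> ~14.99 (meters)
```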
Once robots know how close or far away things are, they can build maps of what they think the world looks like and navigate around objects more safely. With each observation, and by keeping track of its own path, a robot can update its mental map. But keep in mind: most environments change, and no sensor is perfect. So a lot goes into localization, but after a robot learns about the world, it can plan paths to navigate through it.
Planning is when an AI strings together a sequence of events to achieve some goal, and this is where robotics can tie into Symbolic AI from last episode. For example, let's say John-Green-bot had been trained to learn a map of this office, and I wanted him to grab me a snack from the kitchen. He has localization covered, and now it's time to plan.

To plan, we need to define actions, or things that John-Green-bot can do. Actions require preconditions, which describe how objects currently exist in the world, and actions have effects that change how those objects exist. So if John-Green-bot's mental map has a door between his current location and the kitchen, he might want to use an "open door" action to go through it. This action requires a precondition of the door being closed, and its effect is that the door will be open so that John-Green-bot can go through it.

John-Green-bot's AI would need to consider different possible sequences of actions (including their preconditions and effects) to reason through all the routes to the kitchen in this building and choose one to take. Searching through all these possibilities can be really challenging, and there are lots of different approaches we can use to help AIs plan, but that would deserve a video of its own.
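Still, a bare-bones version of the idea fits in a few lines. This sketch uses a tiny made-up action set (not John-Green-bot's real one) and breadth-first search to find a sequence of actions whose preconditions and effects chain from the start state to the goal:

```python
from collections import deque

# States are sets of true facts. Each action lists the facts it requires
# (preconditions) and the facts it adds or removes (effects).
ACTIONS = {
    "open door":     {"pre": {"at hallway", "door closed"},
                      "add": {"door open"}, "del": {"door closed"}},
    "go to kitchen": {"pre": {"at hallway", "door open"},
                      "add": {"at kitchen"}, "del": {"at hallway"}},
    "grab snack":    {"pre": {"at kitchen"},
                      "add": {"has snack"}, "del": set()},
}

def plan(start: frozenset, goal: set) -> list:
    """Breadth-first search over action sequences; returns a shortest plan."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:                     # every goal fact is true
            return steps
        for name, act in ACTIONS.items():
            if act["pre"] <= state:           # preconditions hold?
                new_state = frozenset((state - act["del"]) | act["add"])
                if new_state not in seen:
                    seen.add(new_state)
                    queue.append((new_state, steps + [name]))
    return []

print(plan(frozenset({"at hallway", "door closed"}), {"has snack"}))
# -> ['open door', 'go to kitchen', 'grab snack']
```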
Anyway, during planning we run into the third core problem of robotics: manipulation. What can John-Green-bot's mechanical parts actually do? Can he actually reach out his arms and interact with objects in the world?

Many humans can become great at manipulating things (and I'm talking about objects, not that Force powers stuff). For example, I can do this, but it took me a while to get good at it. Just look at babies; they're really clumsy by comparison. Two traits that help us with manipulation, and can help our robots, are proprioception and closed-loop control.

Proprioception is how we know where our body is and how it's moving, even if we can't see our limbs. Let's try an experiment: I'm going to close my eyes, stretch my arms out wide, and point with both hands. Now, I'm going to try to touch my index fingers without looking. Almost perfect! And I wasn't way off, because of proprioception. Our nervous system and muscles give our body its sense of proprioception, but most robots have motors and need sensors to figure out whether their machine parts are moving and how quickly.

The second piece of the puzzle is closed-loop control, or control with feedback. The "loop" we're talking about involves the sensors that perceive what's going on and the mechanical pieces that control what's going on. If I tried that experiment again with my eyes open instead of closed, it would go even better. As my fingers get closer to each other, I can see their positions and make tiny adjustments. I use my eyes to perceive, I control my arms and fingers with my muscles, and there's a closed loop between them: they're all part of my body and connected to my brain.

It would be a totally different problem if there were an open loop, or control without feedback. For example, if I closed my eyes and tried to touch my finger to someone else's, my brain couldn't perceive with their eyes or control their muscles, so I wouldn't get any feedback and would basically have to keep doing whatever I started doing.

We use closed-loop control in lots of situations without even thinking about it. If a box we're picking up is heavier than expected, we feel it pull the skin on our fingers or arms, so we tighten our grip. If it's even heavier than expected, we might involve our other hand, and if it's still too heavy, well, we'll call over my open-loop-example buddy. But this process has to be programmed when it comes to building robots.
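Here's what that might look like in code: a toy feedback loop, with invented numbers, that keeps tightening a grip until the sensed slip disappears. An open-loop version would pick one grip force up front and hope for the best.

```python
# Toy closed-loop (proportional feedback) controller for the box example:
# each pass through the loop the robot senses how much the box is slipping
# and tightens its grip by a fraction of the error.
def grip_with_feedback(required_force: float, gain: float = 0.5,
                       steps: int = 20) -> float:
    grip = 0.0
    for _ in range(steps):
        slip = required_force - grip   # sensor reading: how much we're slipping
        if abs(slip) < 0.01:           # close enough: box held securely
            break
        grip += gain * slip            # actuator command: tighten a bit
    return grip

print(round(grip_with_feedback(8.0), 2))  # -> 7.99, converging toward 8.0 newtons
```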
on a robot’s hardware and programming. But with enough work we can get robots to
perform specific tasks like removing the cream from an oreo cookie. Beyond building capable robots that work on
their own, we also have to consider how robots interact and coordinate with other robots
and even humans. In fact, there’s a whole field of Human-Robot
Interaction that studies how to have robots work with or learn from humans. This means they have to understand our body
and spoken language commands. What’s so exciting about Robotics is that
it brings together every area of AI into one machine. And in the future, it could bring us super
powers, help with disabilities, and even make the world a little more convenient by delivering
snacks. John-Green-bot: Here you go, Jabril! Thanks, John-Green-bot… go get me a spoon. But… we’re still a long way from household
robots that can do all these things. And when we’re building and training robots,
we’re working in test spaces rather than the real world. For instance, a LOT of work gets done on self-driving
car AI, before it even gets close to a real road. We don’t want a flawed system to accidentally
hurt humans. These test spaces for AI can be anything from
warehouses, where robots can practice walking, to virtual mazes that can help an AI model
learn to navigate. In fact, some of the common virtual test spaces
are programmed for human entertainment: games. So next week, we’ll see how teaching AI
to play games (even games like chess) can help us solve real-world problems. See ya then. John Green Bot: Crash Course AI is produced in association with PBS Digital Studios. If you want to help keep Crash Course free
for everyone, forever, you can join our community on Patreon. And if you want to learn more about engineering robots check out this video.

50 Comments

  • Dulce Vazquez October 25, 2019 at 8:40 pm

    ☺️☺️

  • Dezik Petrov October 25, 2019 at 8:40 pm

    First Comment

  • John Michael October 25, 2019 at 8:40 pm

    First!

    Never mind Dezik has that honor this time. Congrats good sir.

  • DaBTEDI October 25, 2019 at 8:40 pm

    Second

  • Jenny Kim October 25, 2019 at 8:41 pm

    Yay

  • Tami’s World October 25, 2019 at 8:41 pm

    😌😌😌😂

  • Tatyana Tatyana October 25, 2019 at 8:41 pm

    First comment 😁😁 love robotics

  • Jenny Kim October 25, 2019 at 8:41 pm

    Let the comments flow in!!!!

  • Avery the Cuban-American October 25, 2019 at 8:42 pm

    I've participated in FRC robotics for four years in software. It was a cool experience. I got to go to the world championship in Detroit a few months ago

  • Lycan23beast October 25, 2019 at 8:52 pm

    Terminator..

  • Максим Чех October 25, 2019 at 8:53 pm

    Great stuff, thank you very much. Robotics is the future.

  • Dull Bananas October 25, 2019 at 8:54 pm

    When did that robot get smart

  • s0So October 25, 2019 at 8:54 pm

    Where my technomancers at?

  • Артур Богданов October 25, 2019 at 8:57 pm

    Thank you
    Simple and clear

  • The Blazer October 25, 2019 at 8:59 pm

    This is my show Robot…
    Robot: UhHhHh OhHhHh

  • Greg Hartwick October 25, 2019 at 9:09 pm

    Good job, Jabril.

  • vrgajjala October 25, 2019 at 9:13 pm

    I didn't watch the first and second video, of this topic.

  • Sz Fenek October 25, 2019 at 9:14 pm

    You've cheated us about weight of that box ! 😀

  • DevHustle October 25, 2019 at 9:21 pm

    I'm a simple man… Crashcourse uploads a video…I watch the video.

  • MASTER FAZE SAITAMA October 25, 2019 at 9:42 pm

    This video so confusing

  • Abdulmajeed Abdulraheem October 25, 2019 at 9:56 pm

    Jabril!

  • N October 25, 2019 at 10:02 pm

    I never thought that robotics could be so diverse

  • Osxkarallmyfriendsaredead October 25, 2019 at 10:04 pm

    Hiiiiii I love crash course

  • Porg liberation movement October 25, 2019 at 10:13 pm

    Lets see…. Skynet…. Cylons… AI taking over and killing all humans…. Sounds good!!!!

  • Michael Pisciarino October 25, 2019 at 10:14 pm

    2:04 3 Core Problems
    1. Localization and planning (knowing where one is and where one is going)
    2:55 Stereoscopic Vision
    3:45 Infrared Depth Cameras
    4:19 Robotic Mental Map

    4:38 Planning
    5:55 Manipulation
    + Proprioception + Closed-Loop Control
    9:10 Test Bases ———–> The Real World

  • Flaming Basketball Club October 25, 2019 at 10:27 pm

    👌👌👌👌👌👌👌

  • Frederik Danielsen October 25, 2019 at 10:33 pm

    It seems to jump from AI#9 to AI#11

  • tempodude October 25, 2019 at 10:38 pm

    4:17
    … tree?
    TREE?!
    TREEEEEEEEEE

  • Wehttam Vonairibas October 25, 2019 at 11:19 pm

    Lets have an episode hosted only by John green bot

  • Google Is A Cruel Mistress October 25, 2019 at 11:34 pm

    Is that BMO in the thumbnail… respect.

  • - October 26, 2019 at 7:02 am

    6:21 – That's because babies are stupid; everybody knows that, well everybody except for babies, because they're stupid.

  • kil98q October 26, 2019 at 7:18 am

    the infrared depth camera thing was wrong! It uses known patterns and looks a deformation!!!!

  • The thing that Reads a lot October 26, 2019 at 11:09 am

    Why don’t you guys make an app ?????
    I feel like every school will require students to download it !

  • Geoffrey Winn October 26, 2019 at 11:53 am

    Educational!

  • RedXFeather October 26, 2019 at 1:29 pm

    Was… The Robot voiced by Michael from Vsauce?

  • LeftPinkie October 26, 2019 at 3:29 pm

    Argh. I am so tired of everyone misusing the term AI… mostly for clickbait & because they don't know what it truly means. Not all robots are AI… far from it. You mentioned the word "programming" several times. If a system has to be programmed then by definition it is not AI. Just like human intelligence is not programmed down to every last possibility. Intelligence & learning are providing a framework or base data & having the system whether it be a human or a robot learn & grow above said framework. For example if give someone / something 1 widget a day & they inferred that they will have 7 widgets in a week… that is not intelligence, that's is just computation using another predefined framework, arithmetic. I have seen people attached AI to a toaster. A toaster was programmed to make toast. Yes I may be able to remember your preferences but it will never be able to outgrow its original programming. So please everybody stop throwing the letters AI around because it's the latest buzzword.

  • Caleb October 26, 2019 at 3:44 pm

    Search Andrew Yang AI

  • Mihail Stefanov October 26, 2019 at 9:56 pm

    I love this series, because I like how Jabril explains the topic. Keep it up! 🙂

  • Sheela HTML5 October 27, 2019 at 1:14 am

    Subscribe to McAwesomeYT

  • Adam Flanders October 27, 2019 at 7:36 am

    The infrared light technology is similar to how iPhones use face ID to map your face with thousands of tiny dots. Modern iPhones repeatedly flash your face with infrared light every few moments. I never noticed it until I saw the flashes on my face in an infrared security camera.

  • Alisa Adler October 27, 2019 at 5:44 pm

    great, how creative people are and what a terrific things they can invent

  • bacon banana October 27, 2019 at 11:26 pm

    Dude, make your own channel so I can subscribe just to you.

  • Ilham Kusuma October 28, 2019 at 9:20 am

    walking is hard, human learn it a few months or even year, 24 hours per day. when a baby sitting they still learn how to make its backbones stay strong and stable for walking. dont compare newly running algorithm inside robot's brain in controlled environment and a lot of data a baby learnt in months. also computational structure and power also is very different. if u include the cost, that will much more difference.

  • Onais Nadeem October 28, 2019 at 4:46 pm

    Chemistry is in my blood
    I love studying it

  • Graebel October 28, 2019 at 8:56 pm

    There's a youtube channel called Codey Owens thats posting your videos without giving this channel credit. Thought you should know.

  • Israel Juarez October 29, 2019 at 3:35 pm

    About 10 million

  • Kent Bouchard October 30, 2019 at 12:49 am

    JABRILS IS IN THE VIDEO, YES THEY KNOW INDEPENDENT GAME DEVELOPERS.

  • Psi academy jee October 30, 2019 at 2:58 pm

    For physics vedios free vedios see psi academy jee Sebastian sir

  • CCRLH85 October 30, 2019 at 10:16 pm

    So, has anyone else here ever coded an autonomous robot that accidently caused personal injury?

  • Ridwan Abrar November 1, 2019 at 6:05 pm

    I wonder if he's the only one in Crash Course who talks slowly enough
