Why robot? Why would a robot look like a human? What kind of work exactly will it be?

Why are scientists and AI researchers arguing about whether we need robots? How realistic is the scenario in which artificial intelligence would want (and be able) to destroy humanity? When will a robot take over my workplace?

Here is an excerpt from “Homo Roboticus? People and Machines in Search of Mutual Understanding” by New York Times writer and journalist John Markoff, which will help you make sense of it all.

The end of the era of printers

On a late spring evening in 1992 at Grand Central Station, an elderly man wearing a blue New York Times windbreaker waited on the platform for a train bound for Westchester County. I had been working at the Times for a while and grew curious about the shadowy figure. “Are you a newspaper employee?” I asked.

As it turned out, many years ago he had been a typesetter at the Times. In 1973, his union signed an agreement to gradually eliminate jobs as the company introduced computerized printing systems, in exchange for job security until retirement. Although the man had not worked in over a decade, he still came to the Times Square printing plant and spent evenings with the remaining printers.

Typesetters and printers were highly skilled workers who were hit especially hard by the advent of minicomputers in the 1970s and the sharp drop in their cost as they moved from transistors to integrated circuits. Today the fate of the typesetter serves as a striking example of what is happening to human labor under the influence of the new wave of automation.

Where will AI go?

Today it is equally possible to include people in computer systems and to exclude them. Continued developments in both artificial intelligence and intelligence enhancement will force roboticists and computer scientists to decide what systems will look like in the workplace and in the world around us. Whether we like it or not, we will soon have to coexist with autonomous cars.

Software developer and consultant for Google's automotive project Brad Templeton once said, "A robot will become truly autonomous when you tell it to go to work and it decides to go to the beach." This is a great phrase that connects self-awareness with autonomy. Today, machines are beginning to operate without significant human intervention or at a level of independence that can be considered autonomy. This poses difficult questions for designers of intelligent machines. However, for the most part, engineers have ignored the ethical issues that arise when using computer technology. Only occasionally does the artificial intelligence research community give in to foreboding.

Robots at war

At the 2013 Humanoids conference in Atlanta, which focused on the development and application of anthropomorphic robots, Georgia Tech roboticist Ronald Arkin gave an impassioned speech entitled "How NOT to Build a Terminator." He reminded the audience that to his famous three laws, Asimov later added the fundamental “zero” law of robotics, which states: “A robot cannot harm humanity or, by inaction, allow humanity to come to harm.”

Addressing more than 200 roboticists and artificial intelligence specialists from universities and companies, Arkin called for deeper thinking about the consequences of automation. “We all know that the competition is aimed at emergency situations under the motto of ‘search and destroy,’” he said sardonically, adding: “Sorry, I meant the motto was ‘search and rescue.’”

The line between robots acting as rescuers and as guards is already blurred, if it exists at all. Arkin showed clips from science fiction films, including James Cameron's 1984 “The Terminator.” Each featured evil robots performing the very tasks DARPA set in its competitions: clearing debris, opening doors, breaking through walls, climbing stairs and driving cars. Developers can use these capabilities constructively or destructively, depending on their intentions. The audience laughed nervously, but Arkin did not let them relax. “I'm joking,” he said, “but I want to show that the technologies you develop can be used for purposes you haven't even thought about.”

In the weapons field, the potential for unexpected consequences has long been a feature of so-called dual-use technologies, such as nuclear energy, which can serve both as a source of electricity and as a weapon. This is now increasingly true of robotics and artificial intelligence.

These are dual-use technologies, not only in terms of their potential to be used as weapons, but also in their potential to enhance or replace human capabilities.

Today we are still “in the control loop”: machines that replace or extend human capabilities are designed by people, who cannot shrug off responsibility for the consequences of their inventions. “If you want to create a Terminator, then just keep doing your thing without thinking about it, and you will get just such a device,” Arkin said. “But the world around us cares about the consequences of what we create.”

Issues and problems of automation have spread beyond the technical community. In a little-noticed public report from the Pentagon, “The Role of Autonomy in Defense Systems,” the authors drew attention to the ethical problems of automating combat systems. The military is already directly confronting the contradictions associated with autonomous systems such as drones, and it is approaching the point beyond which questions of life and death will no longer be decided by people. In one of his speeches, Arkin argued that, unlike humans, autonomous combat robots would not sense threats to their personal safety, which could potentially reduce collateral damage and prevent war crimes.

Arkin also formulated a new set of ethical issues. What if our robots have morality, but the enemy's do not? There is no easy answer. Indeed, increasingly intelligent and automated weapons technologies have sparked a new arms race, and adding low-cost intelligence to weapons systems threatens to change the balance of power between countries.

When Arkin finished speaking in the stately building of the Academy of Medicine in Atlanta, one of the first to respond was the director of the DARPA Robotics Challenge, Gill Pratt. He did not refute Arkin's point, but reiterated that robots are a dual-use technology: “It's very easy to criticize robots that are funded by the Department of Defense,” he said. “It's very easy to draw a robot that looks like the Terminator, but since everything around us is dual-use, that by itself changes nothing. If you're building a healthcare robot, you'll have to make it more autonomous than an emergency rescue robot.”

Advanced technologies have always raised questions regarding dual-use. Nowadays, artificial intelligence and machine autonomy have led to a rethinking of this problem. Until now, dual-use technologies have directly required people to make ethical decisions about their use. The autonomy of machines either delays human ethical decision-making or eliminates it completely.

Are modern robots autonomous?

We already have examples of scientists and engineers in other fields thinking about the potential consequences of what they do, and many of them coming to the defense of humanity. In February 1975, for example, Nobel laureate Paul Berg called on the elite of the then new biotechnology to meet at the Asilomar Convention Center in Pacific Grove, California. At the time, recombinant DNA, made by adding new genes to the DNA of living organisms, was the latest advance. It simultaneously promised global progress in technology and new materials and opened up the terrible possibility of the unintentional destruction of humanity as a result of the emergence of new microorganisms. A meeting of scientists led to an extraordinary decision.

The group recommended that molecular biologists refrain from certain types of research and pause work while ways to ensure safety were found. To monitor the field, biotechnologists created an independent committee at the National Institutes of Health. Within a decade, enough data had been collected to lift the restrictions on research. It was a striking example of society taking a reasonable approach to assessing the consequences of scientific progress.

Following the biologists' example, a group of artificial intelligence researchers and roboticists also met in Asilomar, in February 2009, to discuss the state of their field. The meeting was convened by Microsoft researcher Eric Horvitz, president of the Association for the Advancement of Artificial Intelligence. In the previous five years, researchers had been debating two warning signs. One was the announcement of the relatively imminent arrival of computer superintelligence. Sun Microsystems co-founder Bill Joy also painted a bleak picture of artificial intelligence: in an article in Wired magazine he described a trio of technological threats: robotics, genetic engineering and nanotechnology. Joy believed these fields posed a triple threat to human survival and saw no obvious solution.

The artificial intelligence researchers who met at Asilomar chose a less cautious path than their biotech predecessors. A group of computer science and robotics luminaries, including Sebastian Thrun, Andrew Ng, Manuela Veloso and Oren Etzioni, now director of the Allen Institute for Artificial Intelligence, generally dismissed the possibility of a superintelligence that would surpass humans, as well as the suggestion that artificial intelligence could emerge spontaneously on the Internet. They agreed that autonomous robots capable of killing were already being developed, but their report, which appeared toward the end of 2009, was rather muted: artificial intelligence had not yet reached the point of being an immediate threat.

“At the 1975 meeting, there was talk of a moratorium on recombinant DNA research. The context of the meeting of the American Association for the Advancement of Artificial Intelligence was completely different. The field has been making fairly good, steady progress, but artificial intelligence researchers have openly expressed disappointment that progress has not been fast enough given current hopes and expectations,” wrote the authors of the final meeting report.

One way or another, five years later the issue of machine autonomy arose again. In 2014, when Google acquired the British AI company DeepMind, roboticists were thought to be very close to creating fully autonomous robots. The tiny startup had demonstrated a program that could play video games better than humans. Reports of the acquisition were accompanied by the statement that Google was creating an “ethics board” out of concern over the technology's potential use and abuse.

One of DeepMind's co-founders, Shane Legg, acknowledged that the technology could ultimately have negative consequences for humanity: “I think humanity will disappear, and technology will likely play a role in that.” For an artificial intelligence researcher who had just received hundreds of millions of dollars, it was an odd position to take. If someone believes a technology can destroy humanity, to what end does he keep developing it?

At the end of 2014, the meeting on artificial intelligence was repeated - a new group of researchers, funded by one of the founders of Skype, met in Puerto Rico to discuss the safety of research. Despite a new wave of warning signs from luminaries such as Elon Musk and Stephen Hawking, the open letter from participants did not contain the same call to action that was heard at the biotech meeting in Asilomar in 1975.

Given that DeepMind was acquired by Google, Legg's public philosophizing takes on special significance. Today, Google is the most striking example of the potential consequences of the development of AI and intelligence enhancement. Built on an algorithm that efficiently captures knowledge and then feeds it back to humans through the process of searching for information, Google is now busy building a robot empire. The company can create machines that replace people: drivers, delivery workers and electronics assemblers. It's unclear whether it will remain an "intelligence enhancement" company or focus on AI.

And again ethical questions

A new wave of concern about the potential threat from artificial intelligence and robotics echoes the ethical issues depicted in the sci-fi film Blade Runner. At the beginning of the film, Detective Deckard meets Rachael, an employee of the company that produces robots (replicants), and asks how expensive the artificial owl in the office is. She suggests that he does not appreciate the value of the company's work. “Replicants are like any other machine,” Deckard replies. “They are either a benefit or a hazard. If they are a benefit, it is not my problem.”

How long will it be before Google's intelligent machines, built on technology from DeepMind and Google's robotics division, raise the same questions? Few films have had the cultural impact of Blade Runner. Seven versions of it have been released, one of them the director's cut, and a sequel is currently being filmed. It tells the story of a retired Los Angeles detective who, in the year 2019, is called back to hunt down and destroy a group of artificially created beings known as replicants. The replicants were designed to work off-planet but have returned illegally to Earth to force their creator to extend their limited lifespans. A modern-day Wizard of Oz, the film reflects the hopes and fears of a technologically savvy generation.

From the Tin Man receiving a heart and thus becoming somewhat human, to replicants so superior to humans that Deckard is ordered to destroy them, humanity's relationship to robots becomes the defining question of the era.

These "intelligent" machines may never become intelligent in the human sense or self-aware. That's not the point. Artificial intelligence is rapidly improving and is approaching a point where it will increasingly appear to be intelligence.

Released in December 2013, the film “Her” resonated widely with the public, most likely because millions of people already interact with personal assistants such as Apple's Siri. Interactions like those shown in the film have become commonplace. As computers get smaller and become embedded in everyday objects, we expect interactions with them to be smarter. Working on Siri while the project was still hidden from public view, Tom Gruber called the system “intelligence in the interface.” He thought he had succeeded in bridging the competing worlds of artificial intelligence and intelligence enhancement.

Indeed, the emergence of software-based intelligent assistants seems to hint at the convergence of such incompatible communities as developers of human-machine interaction systems and artificial intelligence researchers. Alan Kay, who pioneered the modern personal computer industry, said that by working on computer interfaces, he was working for the future, which would arrive in 10–15 years. Nicholas Negroponte, an early researcher in embedded media and speech interfaces, worked with a 25-30 year vision. Like Negroponte, Kay argues that the best computer interfaces are those that feel more like theater, and the best theater involves the audience so much in its world that people feel like they are part of it. This approach leads directly to interactive systems that function more like intelligent peers rather than computerized tools.

How will these computer avatars transform society? People already spend a significant portion of their waking hours interacting via computers with each other or with human-like machines in video games or in virtual systems from FAQbots to Siri. We use search engines even in everyday conversations with each other.

Machines already shape our daily lives

Will these intelligent avatars become our servants, assistants and colleagues or all at once? Or will we face a darker scenario in which they turn into our masters? Approaching robots and artificial intelligence from the point of view of social relationships may at first seem absurd. However, given our tendency to humanize machines, we will likely engage in social relationships with them as they become more autonomous.

This brings us to another big challenge: the risk of losing control of everyday decisions to increasingly complex algorithms. Not long ago, Silicon Valley venture capital veteran Randy Komisar was at a conference and listened to a talk describing Siri's competitor, Google Now. “People seem to be dying for some intelligence to tell them what to do,” he said. “What they should eat, who they should date, what parties they should go to.”

In his opinion, for today's young generation the world has turned upside down. Instead of using computers to gain freedom and the ability to think big, to form intimate relationships, to realize their individuality and creativity, young people are so hungry for direction that they are willing to hand over this responsibility to artificial intelligence in the cloud.

The contradiction inherent in the ideologies of artificial intelligence and intelligence enhancement first struck me when I realized that Engelbart and McCarthy were creating computer technology for completely different purposes. There was both duality and paradox in their approaches. And this is understandable: if you expand human capabilities with computer technology, you inevitably displace people. At the same time, choosing one side or the other in the dispute is a matter of ethics, although it cannot be viewed as a choice between black and white.

Of course, these technologies have limits. American AI researcher Terry Winograd says that the goals of using computer technologies - expanding the capabilities of people or replacing them - depend more on the nature of the economic system, rather than on the properties of the technologies themselves. In a capitalist economy, if artificial intelligence technologies can replace white collar and knowledge workers, then they will inevitably be used in this way. This is precisely the lesson for artificial intelligence researchers, roboticists and programmers who approach the systems of the future in such different ways. It is clear that Bill Joy's warning (“the future doesn't need us”) is only one possible outcome. It is no less obvious that the world transformed by these technologies should not turn into a disaster.

There is a simple answer to what began as a paradox for me. Resolving the contradiction between artificial intelligence and intelligence enhancement depends on the decisions of people - engineers and scientists who consciously chose an anthropocentric approach.

This is a problem with us as people and the world we create. This is not a machine problem.

Science fiction writers invented robots decades ago, but smart metal people have never appeared on our streets. Many things stand in the way of turning that dream into reality, including humans themselves.

Non-universal helpers

Cute creatures made of the latest plastics and alloys are, in the popular imagination, supposed to do the hard or boring work: going to the store, washing dishes, vacuuming, doing homework with the children and chatting with grandma about the weather. If necessary, they will take the receipts to the bank and drive their owner to work.

Each of these actions in itself does not require much effort, but together they take a lot of time, so household robots must be universal.

“Today in laboratories there are robots that can handle several tasks, but, first, at any given moment they are busy with only one of them and, second, they cannot independently choose which task to give priority to. Moreover, robots do not understand at all what not to do in a particular situation,” explains Nick Hawes, senior lecturer at the Birmingham School of Computer Science and an artificial intelligence specialist.

To vacuum an apartment, a robot needs one algorithm; to go to the store, another, and both must be written into its electronic “brains.” A small change in conditions that was not specified in advance, say, two food sections in the store swapping places, makes the task impossible: the machine only executes preset commands and cannot “realize” that, in fact, everything in the store is still the same. “One solution is to create a kind of social network for robots, where they upload data obtained in new situations and other robots can download it,” says Nick.
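To make Hawes's point concrete, here is a minimal Python sketch of both halves of it: a robot that can only execute tasks whose parameters were preset, and the proposed “social network” where robots share what they learn. All the names here (TASKS, shared_knowledge, run_task) are hypothetical illustrations, not any real robotics API.

# Preset task algorithms: each works only for the exact conditions it was written for.
TASKS = {
    "vacuum": {"room_map": "apartment_v1"},
    "shop": {"store_layout": "layout_2017"},  # becomes stale if shelves are swapped
}

# The "social network for robots": experience uploaded by one robot,
# downloadable by the others.
shared_knowledge = {}

def run_task(name, observed_params):
    preset = TASKS.get(name)
    if preset is None:
        return "FAIL: no algorithm for this task"
    if observed_params != preset:
        # The environment changed (e.g., food sections swapped places).
        update = shared_knowledge.get(name)
        if update != observed_params:
            return "FAIL: environment changed, no shared update available"
        TASKS[name] = update  # adopt another robot's uploaded experience
        return f"OK: executing '{name}' using shared knowledge"
    return f"OK: executing '{name}'"

# Robot A visits the rearranged store and uploads what it learned;
# robot B then succeeds where it would otherwise have failed:
shared_knowledge["shop"] = {"store_layout": "layout_2018"}
print(run_task("shop", {"store_layout": "layout_2018"}))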

Limited mind

Another trait that writers of the future attribute to robots, along with versatility, is fantastic intelligence. Ever since IBM's Deep Blue computer beat one of the greatest chess players on the planet, Garry Kasparov, many people have believed that machines have surpassed humans in intelligence. Supercomputers, and mobile-phone processors performing thousands of operations per second, reinforce this belief. But in reality people have nothing to fear.

In the photo: Nao is equipped with an Intel Atom processor, like those in simple netbooks

The robot mind is limited by the so-called problem of meaning. “This is a huge problem in robotics,” says Hawes. “Robots don't understand what ‘flower’ or ‘sky’ or anything else means. Even worse, people themselves don't know what meaning is; they just understand it, that's all.” A machine can learn that an object with four legs, a seat and a back is a chair, but the meaning of the concept “chair” is inaccessible to it. So a robot is unlikely to recognize a legless designer chair with a split back, even though a person would have no trouble with it.
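Here is a tiny sketch of why rules over surface features break down. The feature names and the “learned rule” are invented for illustration:

# Rule learned from typical examples: a chair has four legs, a seat and a back.
def looks_like_chair(obj):
    return obj.get("legs") == 4 and bool(obj.get("seat")) and bool(obj.get("back"))

ordinary_chair = {"legs": 4, "seat": True, "back": True}
designer_chair = {"legs": 0, "seat": True, "back": "split"}  # cantilever base

print(looks_like_chair(ordinary_chair))  # True
print(looks_like_chair(designer_chair))  # False: a human sees a chair, the rule does not

The robot matches features; the concept “something you sit on” is nowhere in the code.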

“People create huge databases where they record all possible meanings of words. But this is only a partial solution: if what you are talking about is in the database, the robot will understand you. But what if the word is not there? There is another approach, where robots are taught meaning through experience. But then they will only learn the meaning of the concepts they have personally encountered,” says Nick Hawes.

DIFFICULTIES
Almost, but not quite...

Anthropomorphism is an insidious thing. If a robot closely resembles a human but some features are still off, people begin to feel revulsion. The phenomenon is called the “uncanny valley”; the term was coined in 1970 by Japanese roboticist Masahiro Mori. At first the rejection reaction was explained by peculiarities of the human psyche, but in 2009 scientists from Princeton showed that monkeys behave in exactly the same way. This means the fear of creatures that look almost, but not quite, like us has serious evolutionary grounds: the brain perceives the differences as a sign of ill health and seeks to limit contact with a potentially dangerous object.

In the photo: these cute robots are very short, just 58 cm tall

Lack of desires

Perhaps most of all, people fear that one day robots will tire of obeying humans and take over the world. The prospect is unlikely, and not only because robots do not understand the meaning of the words “take over” and “world.” A much more compelling reason is that engineers have so far been unable to give robots consciousness, the hard-to-define thing that gives people freedom of choice and desires, including the desire for world domination.

“We do not yet understand how consciousness forms in people, which means we cannot reproduce it in robots. In my opinion, the key is how exactly the different parts of the brain are connected to each other. If we ever figure that out, we might be able to replicate the structure of the brain and give robots consciousness,” Hawes believes.

PRACTICE
More is better

Many actions that require no effort from a person are impossible for robots. Mechanical creatures have difficulty gauging the strength of their grip when shaking hands or picking up something fragile; they walk very poorly and cannot run at all. At RoboCup, the annual robot football championship, players move at a speed of about 3 m/s (10.8 km/h), and the best footballers have wheels or tracks instead of legs.

It is very difficult for bipedal robots to keep their balance; when walking, the processor calculates each step, determining exactly how to distribute the weight. The most stable in motion are robots with four limbs, such as the “big dog” BigDog (pictured), created by Boston Dynamics in collaboration with NASA's Jet Propulsion Laboratory. The creature on flexible legs can walk over flat ground, sand, snow and shallow water, climbs up and down slopes, and all the while hauls up to 150 kilograms on its “back.” It is not so easy to knock over: in demo videos, engineers kick the robot, but it stays on all fours.
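As a rough illustration of what “determining exactly how to distribute the weight” means, here is a one-dimensional statics sketch: project the center of mass onto the line between the two supporting feet and split the weight by the lever rule. The numbers are illustrative, and real gait controllers are far more involved than this.

def weight_distribution(x_com, x_back, x_front, total_weight):
    # Lever rule: the closer the center of mass (CoM) is to a foot,
    # the larger the share of the weight that foot carries.
    if not (x_back <= x_com <= x_front):
        return None  # CoM outside the support region: the robot tips over
    front_share = (x_com - x_back) / (x_front - x_back)
    return {"front": front_share * total_weight,
            "back": (1.0 - front_share) * total_weight}

# CoM 0.3 m ahead of the back foot, feet 0.5 m apart, a 100 kg robot:
print(weight_distribution(0.3, 0.0, 0.5, 100.0))
# {'front': 60.0, 'back': 40.0}; the split must be recomputed at every step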

Machines that do not understand the meaning of words and have no consciousness cannot replace people where one must act outside the template, even a complex one. For example, even though robots know no fear, feel no pain, can exist without oxygen and water, and withstand extreme temperatures, they make very poor astronauts. “The information a rover takes three months to collect, a person would gather in three hours,” explains Nick. “People on Earth look at the telemetry and send the device instructions: how many centimeters to travel, which stone to approach, which tool to use. A person would make all these decisions in a split second.” On average, a signal travels from Mars to Earth in about 15 minutes (and the same back), and communication is not always possible because of interference. The “output” of even a short human trip to Mars would therefore be hundreds of times greater than that of several robotic missions, each lasting years. The record holder among Martian long-livers, the Opportunity rover, has covered only about 40 kilometers in more than 10 years on the Red Planet.
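The 15-minute figure is easy to check with light-travel time. The distance below is a mid-range value assumed for illustration; the real Earth-Mars distance varies from roughly 56 to 400 million km, so the one-way delay varies from about 3 to 22 minutes.

C_KM_PER_S = 299_792.458   # speed of light in km/s
distance_km = 270e6        # assumed mid-range Earth-Mars distance, in km

one_way_min = distance_km / C_KM_PER_S / 60
print(f"one-way delay: {one_way_min:.1f} min, round trip: {2 * one_way_min:.1f} min")
# ~15.0 min one way, ~30 min round trip: joystick-style control is impossible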

Yes, robots calculate well; they are strong, resilient and work without breaks for sleep or food. But, paradoxically, machines will not become universal assistants until they grow more human-like and acquire consciousness (or perhaps a soul).


Articles periodically appear about how robots will soon replace people and leave them without work. Against this backdrop, more and more pieces are published whose authors list the professions that “will not die out” and invite readers to take courses on writing the “right resume.” Has technology really changed people's access to work that much?

Nothing new for a century and a half

I believe that in reality robotization is just another round of the technological revolution. This is a historical process that has no beginning or end.

Robots cause the greatest fear among citizens of economically developed countries, whose high technological level allows production to be automated as much as possible. But global statistics show that people are not being replaced yet. Otherwise the unemployment rate would have to rise, yet exactly the opposite is true. According to the International Labour Organization, unemployment in advanced economies will fall to a record 5.5% in 2018, for the first time since 2007 (though the report notes that overall global unemployment remains relatively high).

At the beginning of the 19th century, the driver and a small train crew replaced hundreds of coachmen working on intercity stagecoaches. At first glance, this is a tragedy for the coachmen. But employment in the transport sector at that time was only 2%, and at the moment it has grown to 5%. This always happens when an industry becomes more efficient and begins to fulfill its main purpose faster and better. Improving the quality of services always creates higher demand, and this ultimately only increases employment in this area, rather than reducing it.

People fear robotization as something new and dangerous. But it began back in the 1970s, if not earlier: the first microprocessor, the first personal computer, the first cellular network. If you had told a bank teller back then that you could walk up to a box, insert a plastic card and get cash, he would most likely also have thought a robot was threatening his job. In fact, the fundamental change happened to the teller himself: he now does service work, while all the mechanical work has been handed over to machines. Has this raised the quality of services? Undoubtedly.

An ATM is the automation of mechanical work, but neural networks are already a kind of intelligent system. Suppose neural networks learn to draw up statements and payment documents completely autonomously and, at some level, to control financial flows within a company. Then, in essence, the accounting profession will die out. Perhaps entirely, or perhaps a narrow specialization will remain for an auditor who supervises the robots.

But when some jobs disappear, new ones appear in their place. The flow of employment from one sector to another has accelerated over the past 50 years, and all neural networks will essentially do is speed this process up further. The accounting profession may disappear, but support staff for automated accounting systems with experience in corporate finance will be in demand for at least another 10-15 years, and after that some other job will appear for them.

Change of profession

Robotization does not lead to an increase in unemployment. Work for people does not disappear, it just flows into areas where decision-making depends on the person.

This is the main trend of 2018: employment will be redistributed between sectors of production. In particular, the redistribution of employment from agriculture toward the service sector continues. The number of jobs in production is slowly falling: people are needed less and less where mechanical labor is required; instead, they either train the automation or supervise it.

A robot will not replace a person where one must build a system and make informed management decisions. Building systems is an entirely intellectual process: a computer cannot create something like itself. An ATM can give you money, but it does not decide how much: the scoring based on social networks and past spending is done by artificial intelligence, while the final management decision remains with the banking analyst.
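A minimal human-in-the-loop sketch of that division of labor, assuming a toy linear scoring model; the features, weights and threshold are all hypothetical, not any real bank's method:

def machine_score(applicant):
    # The "AI" part: a toy score over repayment history, income and social signals.
    return (0.6 * applicant["repayment_history"]
            + 0.3 * applicant["income_stability"]
            + 0.1 * applicant["social_signal"])

def decide(applicant, analyst_signoff):
    score = machine_score(applicant)
    recommendation = "approve" if score > 0.5 else "decline"
    # The machine only recommends; a human analyst makes the final call.
    return recommendation if analyst_signoff(recommendation, applicant) else "escalate"

applicant = {"repayment_history": 0.9, "income_stability": 0.4, "social_signal": 0.2}
print(decide(applicant, lambda rec, a: rec == "approve"))  # 'approve'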

This is why machines will never completely take over the HR sector, much less sales. There is always a set of criteria that has not yet been loaded into the machine. Big data is a system built and collected by people, one that looks for relationships in data to make it easier for people to manage it further.

What kind of work exactly will it be?

What is happening to the world is simply an acceleration of transactions. 200 years ago, a letter by mail took three weeks, but now in instant messengers you can exchange information in two minutes.

If mass unemployment does not await us, then where does the information noise come from? Remember social networks: an article about a changed world and a new reality is more likely to be shared than one saying that essentially nothing around us has changed.

Meanwhile, the world is changing naturally: employment flows from one sphere to another, and even under these conditions there will always be a place for ordinary, not highly intellectual work.

Businesses developing driverless-control systems have created hundreds of thousands of jobs in India. These are huge data centers where employees help train the systems every day and are paid $200 a day for it. An image from the test cars' cameras is displayed on a monitor, and the employee must name what is in it: a fire hydrant, a child, a tree. In effect, these people are making the decisions and building the automated system, even if it is not the most intellectual work.
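In code, that labeling workflow is almost trivial; the human is the interesting part. A sketch with invented frame names and an invented label set:

LABELS = {"1": "fire hydrant", "2": "child", "3": "tree", "4": "other"}

def label_frames(frames, ask_human):
    dataset = []
    for frame in frames:
        choice = ask_human(frame)               # the human decision in the loop
        dataset.append((frame, LABELS.get(choice, "other")))
    return dataset

# Simulate an annotator who answers "3" (tree) for every frame:
print(label_frames(["frame_0001.jpg", "frame_0002.jpg"], lambda f: "3"))
# [('frame_0001.jpg', 'tree'), ('frame_0002.jpg', 'tree')]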

Even in an age when, as it seems to us, turbulent changes are taking place, unemployment will not engulf humanity. The world is really developing, automation is coming to production. But the person will remain to train or control the machine. The actions of automation, robots and all kinds of intelligent systems will always need our approval.

Naturally, we are not talking about industrial robots; they have firmly occupied their niche and are doing fine. But “household” robots, robots for ordinary people, have still not achieved any real adoption.

Meanwhile, science fiction, and, so to speak, the theoretical concept of robotics, painted us rosy pictures of robots in widespread use. Robots were supposed to help people by taking on difficult or dreary work.

However, when it came to actual development, developers ran into a shortage of sensible ideas for using robots. On the technology side things are, on the whole, not bad, but where to apply that expertise is far from clear, and the attempts that do exist look unconvincing.

But let's take things in order, starting with the good things that already exist.


Robot construction kits

The first thing that probably comes to mind is robot construction kits: Lego Mindstorms and the like.
There are plenty of such kits, and they usually consist of a set of parts for assembling a robot chassis, a controller in one form or another (the robot's “brain”) and a set of various sensors.

With the instructions and a little perseverance, you can assemble a small robot from these parts, although in essence it will just be a machine that moves.

Using the capabilities of Lego, you can assemble simply incredible mechanisms: from hexapods to tanks with rocket launchers, from a Rubik's-cube solver to a mechanical artist painting the Mona Lisa.

Everyone loves these kinds of construction kits, and Lego's are especially good. But there is one catch: this is by no means a product for ordinary people. These are products for engineers, engineers perhaps not by profession but by vocation, people who like designing mechanisms, and by definition there are not many of them. That means these kits will not make robots something natural for society, the way computers became in their time.

Robot vacuum cleaners

Probably the most widespread household use of robots at the moment is the robot vacuum cleaner. Their cost is falling over time, and they perform their function quite well, helping to keep the house clean.

Of course, they cannot do all the cleaning for a person, but they definitely make the house cleaner. So far this is the most successful use of robots in everyday life, as the sales figures attest: people buy them by the millions.

So we have one truly mass-market product and one educational product. For now, that is all robotics can boast of; the remaining robots are less impressive.

Robot dog

At one time Sony released a funny robot dog, Aibo. Back then it seemed robots would finally come into every home. But the dogs did not sell well, and not only because the price was far from low: there are now plenty of cheaper analogues, and their sales are mediocre too. The fact is that such robots are just expensive toys, and expensive toys have an unfortunate property: people play with them for a week or two and then toss them into a far corner as unneeded. That is why they never became a mass product: they are not used for long.

Android robots

Nao, Darwin OP and a number of others. On the one hand, they are used for education, in which case they compete in the same category as Lego Mindstorms; on the other, they are used for entertainment, in which case they have the same problem as robot dogs: an expensive toy that you play with for a week and then put away.

Exhibition robots

Here I can speak from personal experience. I was at U-novus, the Tomsk forum of young scientists, where there was a robot from the Moscow Institute of Technology. I will not say anything about the robot itself, but since I wanted to test my theoretical guesses in practice, I spent quite a lot of time next to it, watching the people who approached it.

And there were quite a lot of interested people! Just one problem: after playing with the robot for a couple of minutes, everyone lost interest in it.

This is the main problem with such “promotional bots”: they are disposable. They really do attract attention, but only once; after that no one cares about them anymore. The task theoretically assigned to them, being the best way to attract clients, will be carried out only until everyone has had their fill of such robots.

In fairness, it should be noted that they can do a decent job as guides or mobile information kiosks; the question is how many guides are needed, and whether mobile information kiosks are needed at all when a simple information sign will do...

Quadcopters

Let's agree not to consider copters that are flown directly by remote control: those are not robots.
That does not leave many of the uses invented so far.

First of all, the “expensive selfie stick”: a quadcopter flies behind its owner and films them. The idea is not bad in general; there may not be that many extreme sports enthusiasts, but the footage copters capture of them always looks simply amazing. However, there is one catch, which I will come to a little later. Besides, this is mainly a product for extreme sports enthusiasts, or at best tourists, so there is no real mass appeal to speak of.

Secondly, cargo delivery: from small drones in a restaurant to transporting pizza and other small packages. Drones in restaurants are an obviously bad idea: anyone who has seen the lacerations quadcopter blades leave will immediately understand the horror of using them in enclosed spaces.

Cargo transport looks better. Setting aside the risk that young vandals will happily start hunting such machines, we get an excellent alternative to “human” couriers: fast delivery, independence from traffic jams, and overall low cost. Ultimately an excellent product! It would be...

It would be, if not for one fundamental problem facing not just the robot industry but humanity in general: the low capacity of modern batteries. Current quadcopters can fly for about 30 minutes; some manage to stretch the flight time to a whole hour, but no more. How much can you get done in half an hour? Not much.
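A back-of-the-envelope check of what 30 minutes buys a delivery drone. The cruise speed and battery reserve below are assumptions picked for illustration, not manufacturer specifications:

flight_time_min = 30
cruise_speed_m_s = 10     # ~36 km/h, a plausible small-quadcopter cruise speed
reserve = 0.2             # keep 20% of the battery for wind, hover and landing

usable_s = flight_time_min * 60 * (1 - reserve)
round_trip_km = usable_s * cruise_speed_m_s / 1000
print(f"max delivery radius: {round_trip_km / 2:.1f} km per charge")
# ~7.2 km out and back, and then the drone sits on a charger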

As for those extreme sports enthusiasts, what happens to the copter when the half hour runs out? It falls into a cave, water, a volcano, whatever, and that means losing a far-from-cheap copter and certainly an expensive camera. Cargo transport does not look very realistic either: you either deliver one parcel and then sit recharging, or you risk dropping someone's china on the heads of passers-by.

It should be said, though, that large companies are trying to get around this limitation, for example by building recharging stations where a copter can either top up and continue carrying its package or hand it off to another, already-charged drone.

But these are all half measures; until battery capacity improves fundamentally, drones are unlikely to perform their intended functions.

Other

There are several other types of robots: telepresence robots, nanny robots, teacher robots and so on. Some of them are of little use in principle, while others are at too early a stage of development to judge their success. In any case, even if successful, these robots will occupy very narrow niches.

If I have forgotten something, write in the comments.

Bottom line

The end result is a rather sad picture. With rare exceptions, people have not yet figured out what exactly robots are for, in a way that would let them firmly enter ordinary people's lives and become as natural a “companion of man” as, say, the computer.

P.S. I did not consider car autopilots, since they fall somewhat outside the standard “image” of a robot.

In the photo: android models at the Tokyo Game Show, 2017. Photo: Kim Kyung Hoon/Reuters

California's authorities have signed into law a ban on robots, bots and neural networks pretending to be people. It comes into force on July 1, 2019.

As Governor Jerry Brown explained, the law will primarily affect automated bots on social networks that companies use for commercial purposes, such as selling goods or services. Bot creators must now configure them to declare their artificial origin up front and not mislead people. The same applies to voice assistants that make phone calls on behalf of their owner.

The head of the Research Center for Regulation of Robotics and Artificial Intelligence, Andrei Neznamov, speaks about the importance of such a law:

Andrey Neznamov, head of the Research Center for Regulation of Robotics and Artificial Intelligence: “There is such a need. Maybe the United States is a little ahead of the real, pressing need, but it is clear that we will come to this sooner or later. When Google announced its voice assistant a couple of months ago, it demonstrated calls to a restaurant and a hair salon. The call was made by the voice assistant, and no one on the other end realized that a robot was talking to them. Most likely, that was the formal reason such a law appeared; everyone was genuinely amazed by it. At first people were delighted, and then they got scared. After all, if the system on the other end, for example, automatically processes information, then by communicating with it you entrust it with, say, your personal data. Second, a person can be misled about what a robot or artificial intelligence has told him and, on that basis, make a decision or take an action that he would not have taken had he known it was a robot or artificial intelligence. Take the stock exchange: it is quite possible that for some reason a person would not trust a robot and would prefer to get financial advice from a human.”


