GM, Ford and Toyota join to advance self-driving testing, standards

GM, Ford and Toyota join forces on AI consortium for self-driving cars to establish safety standards before putting autonomous vehicles on the roads

  • Companies joining forces with automotive engineering group SAE International
  • Group will begin with focus on data sharing, vehicle interaction, and safe testing 
  • It’s hoped that this will help inform standards development and regulation in US

Three major automakers said on Wednesday they were forming a consortium to help draw up safety standards for self-driving cars that could eventually help create regulations in the United States.

General Motors Co, Ford Motor Co and Toyota Motor Corp said in a statement they were joining forces with automotive engineering group SAE International to establish autonomous vehicle ‘safety guiding principles to help inform standards development.’

The group will also ‘work to safely advance testing, pre-competitive development and deployment,’ they added.



Regulators in the United States have been grappling with how to regulate self-driving cars, with other countries watching closely to see how implementation of the emerging technology pans out.

Last year, U.S. lawmakers, unable to agree on a way forward, abandoned a bid to pass sweeping legislation to speed the introduction of vehicles without steering wheels and human controls onto roads, but may resurrect the effort later this year.

The new group, dubbed the Automated Vehicle Safety Consortium, will begin by deciding priorities, with a focus on data sharing, vehicle interaction with other road users and safe testing guidelines.

Randy Visintainer, chief technology officer at Ford’s Autonomous Vehicles unit, said the goal was to work with companies and government ‘to expedite development of standards that can lead to rule making.’

Last month, the National Highway Traffic Safety Administration asked the public whether robotic cars should be allowed on streets without steering wheels or brake pedals as it tries to set the first legal boundaries for their design.


NHTSA’s existing rules prohibit vehicles without human controls.

The regulator will for the first time compare the safety of a vehicle in which all driving decisions are made by a computer with that of one controlled by a human driver.

Concerns are mounting about automated piloting systems.

A fatal 2018 accident involving a self-driving vehicle operated by Uber Technologies Inc and two deadly plane crashes involving highly automated Boeing 737 MAX airliners have put a spotlight on the ability of regulators to assess the safety of advanced systems that substitute machine intelligence for human judgment.

The new consortium cited as a successful model a standards group that helped create some 4,500 aerospace standards covering airframes, engines and other aircraft parts.

HOW DO SELF-DRIVING CARS ‘SEE’?

Self-driving cars often use a combination of normal two-dimensional cameras and depth-sensing ‘LiDAR’ units to recognise the world around them.

Others, however, rely on visible-light cameras alone to capture imagery of the roads and streets.

They are trained with a wealth of information and vast databases of hundreds of thousands of clips which are processed using artificial intelligence to accurately identify people, signs and hazards.   

In LiDAR (light detection and ranging) scanning – which is used by Waymo – one or more lasers send out short pulses, which bounce back when they hit an obstacle.
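The round-trip timing described above maps directly to distance. A minimal sketch, assuming an idealised time-of-flight measurement (real units also correct for beam angle, atmospheric effects and sensor latency):

```python
# Sketch of LiDAR ranging: the distance to an obstacle follows from
# the laser pulse's round-trip time of flight.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting obstacle, in metres.

    The pulse travels out and back, so the one-way distance is
    half the total path the light covered.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after roughly 667 nanoseconds was reflected by
# an obstacle about 100 metres away.
print(f"{lidar_distance(667e-9):.1f} m")
```

Because the nanosecond-scale timing is so short, a scanner can fire and time hundreds of thousands of pulses per second, which is how the units build up a depth picture of the scene.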

These sensors constantly scan the surrounding areas looking for information, acting as the ‘eyes’ of the car.

While the units supply depth information, their low resolution makes it hard to detect small, faraway objects without help from a normal camera linked to it in real time.

In November last year Apple revealed details of its driverless car system that uses lasers to detect pedestrians and cyclists from a distance.

The Apple researchers said they were able to get ‘highly encouraging results’ in spotting pedestrians and cyclists with just LiDAR data.

They also wrote they were able to beat other approaches for detecting three-dimensional objects that use only LiDAR.

Other self-driving cars generally rely on a combination of cameras, sensors and lasers. 

An example is Volvo's self-driving cars, which rely on around 28 cameras, sensors and lasers.

A network of computers processes this information and, together with GPS, generates a real-time map of moving and stationary objects in the environment.

Twelve ultrasonic sensors around the car are used to identify objects close to the vehicle and support autonomous drive at low speeds.

A wave radar and camera placed on the windscreen read traffic signs and the road's curvature, and can detect objects on the road such as other road users.

Four radars behind the front and rear bumpers also locate objects.

Two long-range radars on the bumper are used to detect fast-moving vehicles approaching from far behind, which is useful on motorways.

Four cameras – two on the wing mirrors, one on the grille and one on the rear bumper – monitor objects in close proximity to the vehicle and lane markings. 
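As a hypothetical sketch of how detections from a sensor suite like the one described above might be merged into the real-time map of moving and stationary objects: the sensor names and data structures here are illustrative, not Volvo's actual software.

```python
# Illustrative sketch: grouping detections from multiple sensors into a
# simple map of moving and stationary objects around the vehicle.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str   # e.g. "radar_front", "ultrasonic_rear" (illustrative names)
    x: float      # metres ahead of the car (negative = behind)
    y: float      # metres left (+) / right (-) of the car
    moving: bool  # whether the object was observed to be in motion

def build_map(detections):
    """Split one frame of detections into moving and stationary objects."""
    world = {"moving": [], "stationary": []}
    for d in detections:
        key = "moving" if d.moving else "stationary"
        world[key].append((d.x, d.y, d.sensor))
    return world

frame = [
    Detection("radar_front", 42.0, 0.5, True),      # vehicle ahead
    Detection("ultrasonic_rear", -1.2, 0.0, False), # wall while parking
]
world_map = build_map(frame)
```

In a real system this step would also fuse overlapping detections of the same object from different sensors and track them across frames; the sketch only shows the basic grouping.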
