Elon Musk joins tech leaders in signing open letter to ‘Pause Giant AI Experiments’ – Op-Ed from software expert and Founder of The Dawn Project

by Anthony Weaver

On 29 March 2023, Elon Musk strongly endorsed an open letter calling to “Pause Giant AI Experiments”. The letter was cosigned by numerous tech leaders, including Apple co-founder Steve Wozniak. Meanwhile, Elon is pushing on with his own quest to conquer the automotive world with FSD AI.

Elon Musk has for years loftily claimed that Tesla’s Full Self-Driving (FSD) AI is the leading AI technology. As such, he has tasked Tesla’s Full Self-Driving AI with driving a car anywhere, anytime, under any weather conditions, ten times better than an average human driver.

Elon Musk tweeted in 2021: “A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work, as the entire road system is designed for biological neural nets with optical imagers.”

The people who endorsed this petition are alarmed by the dangers posed by Advanced AI, such as the recently released upgraded iteration of ChatGPT. But ChatGPT is just an app on your computer, and its sole purpose is entertainment. It has no power over you. Most importantly, it is not armed with 400,000 two-ton killer robots. But Tesla’s Full Self-Driving AI is!

FSD Advanced AI controls the steering wheel, brakes, and accelerator of 400,000 cars, all the time, even when they are parked. Tesla’s Full Self-Driving AI is the only AI operational today that has the power to kill millions of people. All 400,000 FSD cars can communicate with one another over the internet connection they share. The AI could self-drive every FSD Tesla out of its current parking spot to the nearest busy street, accelerate to 100 mph, and swerve into oncoming traffic.

There would be no warning. No one would know what was happening. No one would know what to do. In ten minutes it would all be over, with millions dead and every road impassable.

The open letter signed by Elon Musk notes that:

“The appropriate level of planning and management is not happening for Advanced AI systems.”

Elon Musk personally manages the FSD/Autopilot team along with two public companies and three private companies, all with multibillion-dollar market caps. He obviously can’t devote much time to FSD. His AI leader, Andrej Karpathy, who designed the FSD AI, left Tesla last year after a four-month sabbatical and returned to OpenAI, the developer of ChatGPT. Ashok Elluswamy took over management of the FSD/Autopilot software team but has subsequently been dispatched to Twitter to fix the mess over there.

If you were the engineering manager for a full self-driving car project, would you schedule recognizing and obeying Do Not Enter, Road Closed, and No Right Turn on Red signs before or after making a full release to 400,000 customers? Currently, FSD doesn’t recognize most common traffic signs. Only Tesla engineering would schedule the recognition of common traffic signs after the full product release of a Full Self-Driving car.

Many engineers on the team have objected strongly to Elon Musk’s ridiculous promises, which they know are impossible; a number of them have gone so far as to quit the company over these concerns. FSD/Autopilot’s management is in chaos.

Whatever level of planning exists in the FSD/Autopilot group, it is ineffective. For nine years, Elon Musk has promised delivery of a self-driving system that would drive better than a person within a year, but he has repeatedly failed to make good on his pledge. Instead, he shifts his focus and his plans every time he thinks of another cool feature that is beyond FSD/Autopilot’s ability to deliver.

“AI labs are locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Tesla is locked into this out-of-control race against companies like Google to develop and deploy autonomous vehicles in order to corner the trillion-dollar robotaxi market. Elon Musk is cutting corners on safety to reduce costs.

Elon Musk has recklessly pushed out to customers version after version of FSD that Tesla knows contains hundreds of critical safety defects, in order to mollify customers’ complaints about not having received anything for their money.

The FSD AI is seriously deranged and completely unpredictable. It fairly frequently tries to kill people. For no apparent reason, FSD will suddenly turn the steering wheel left and swerve into oncoming traffic. It appears to be suicidal. How that came about, no one knows. It has tried to kill me twice in just a few hours of riding in an FSD Tesla. Following one of these incidents, I analyzed the frame-by-frame video to confirm that it tried to cross the yellow line and crash into an oncoming car when it knew exactly where my car and the other car were, as well as their direction and speed. It would have engineered a head-on collision had my driver not grabbed the steering wheel in time.

Even FSD’s greatest proponents agree that it will randomly try to kill you.

“Decisions must not be delegated to unelected tech leaders.”

Elon, the ultimate unelected tech leader, is driving all the decisions for the FSD AI, frequently overruling engineers who protest that what he orders them to do is unsafe.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.”

At this time, there is no other AI system anywhere near the magnitude of FSD AI’s potential effects, so it should require the highest confidence that it will have positive effects. Yet for the last nine years, Elon Musk has confidently predicted that FSD will be a better driver than the average person within a year, and the prediction has never been realized. He promised he would do a cross-country trip without touching the controls in 2017. He said one million robotaxis would be on the road by 2020. He said a Model 3 would last for one million miles with minimal maintenance. He said that a Model 3 would not be a depreciating asset and that by 2020 the Model 3 you bought in 2019 would be worth $200,000!

“Now is the time to get independent review before starting to train future systems.”

There are no independent reviews of anything in FSD. It is always full steam ahead, we’ll fix it later.

“All AI labs must immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors.”

Does this include Elon Musk? Is he going to shut down FSD/Autopilot training for at least six months?

“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

By signing the letter, Elon Musk is saying the government should step in and institute a moratorium on FSD if he does not immediately pause development of his FSD AI project for six months.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.  These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

Now Elon Musk acknowledges that the FSD AI needs vastly more testing by independent outside experts, such as The Dawn Project. Current FSD testing is completely inadequate.

Tesla also ignores the tsunami of bugs that testers find, including running over children in crosswalks, passing school buses with red lights flashing, speeding through school zones, going straight out of left and right turn lanes, running red lights and stop signs, ad infinitum. Instead of taking swift remedial action, Tesla has left these critical safety defects in the FSD AI unfixed for six months while the FSD engineers work instead on parking assist, the Optimus humanoid robot, and fixing the Twitter code base.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

Tesla is focused on reducing costs, implementing cool features, increasing performance, and suppressing negative PR by any means necessary. These all come before accuracy, safety, interpretability, transparency, robustness, alignment, trustworthiness, or loyalty (except to Elon Musk).

“New and capable regulatory authorities must be dedicated to AI.” And “A robust auditing and certification ecosystem.”

Elon Musk ignores regulations and thumbs his nose at regulatory authorities at every company, for every product, in every field. Tesla refuses to report miles per disengagement to the State of California by claiming that FSD is not a self-driving car. Is he now committing to respect the regulatory authorities?

“Liability for AI-caused harm”

Tesla puts all the fault and liability for any accident on the driver, even when FSD AI is in control.

Musk can no longer get away with his hypocritical stance towards AI. Since he demands oversight of others’ AI projects, he must follow the same stringent rules himself when it comes to FSD. FSD’s AI has the potential to kill and maim millions of people, so he should stop worrying about a harmless text-producing app like ChatGPT and start heeding the age-old adage: “Physician, heal thyself”.

Now for the big question: “Do the rules that Elon proposed in the open letter apply to him, or just everybody else?”
