Importance Of AI Safety Smartly Illuminated Amid Latest Trends Showcased At Stanford AI Safety Workshop Encompassing Autonomous Systems


Jul 22, 2022

AI safety is vital.

You would be hard-pressed to argue otherwise.

As readers of my columns know well, I have time and again emphasized the importance of AI safety, see the link here. I typically bring up AI safety in the context of autonomous systems, such as autonomous vehicles including self-driving cars, along with other robotic systems. Doing so highlights the potential life-or-death ramifications that AI safety carries.

Given the widespread and nearly frenetic pace of AI adoption worldwide, we are facing a potential nightmare if suitable AI safety precautions are not firmly established and regularly put into active practice. In a sense, society is a veritable sitting duck as a result of today’s torrents of AI that poorly enact AI safety, at times outright omitting sufficient AI safety measures and facilities.

Sadly, scarily, attention to AI safety is not anywhere near as paramount and widespread as it needs to be.

In my coverage, I have emphasized that there is a multitude of dimensions underlying AI safety. There are technological facets. There are the business and commercial aspects. There are legal and ethical elements. And so on. All of these qualities are interrelated. Companies need to realize the value of investing in AI safety. Our laws and ethical mores need to inform and promulgate AI safety considerations. And the technology to aid and bolster the adoption of AI safety precepts and practices must be both adopted and further advanced to attain greater and greater AI safety capabilities.

When it comes to AI safety, there is never a moment to rest. We need to keep pushing ahead. Indeed, please be fully aware that this is not a one-and-done circumstance but instead a continual and ever-present pursuit that is nearly endless in always aiming to improve.

I’d like to lay out for you a bit of the AI safety landscape and then share with you some key findings and crucial insights gleaned from a recent event covering the latest in AI safety. The event, held last week by the Stanford Center for AI Safety, was an all-day AI Safety Workshop on July 12, 2022, at the Stanford University campus. Kudos to Dr. Anthony Corso, Executive Director of the Stanford Center for AI Safety, and the team there for putting together an excellent event. For information about the Stanford Center for AI Safety, also known as “SAFE”, see the link here.

First, before diving into the Workshop results, let’s do a cursory landscape overview.

To illustrate how AI safety is increasingly surfacing as a vital concern, let me quote from a new policy paper released just earlier this week by the UK Governmental Office for Artificial Intelligence entitled Establishing a Pro-innovation Approach to Regulating AI that included these remarks about AI safety: “The breadth of uses for AI can include functions that have a significant impact on safety – and while this risk is more apparent in certain sectors such as healthcare or critical infrastructure, there is the potential for previously unforeseen safety implications to materialize in other areas. As such, whilst safety will be a core consideration for some regulators, it will be important for all regulators to take a context-based approach in assessing the likelihood that AI could pose a risk to safety in their sector or domain, and take a proportionate approach to manage this risk.”

The cited policy paper goes on to call for new ways of thinking about AI safety and strongly advocates new approaches for AI safety. This includes boosting our technological prowess encompassing AI safety considerations and their embodiment throughout the entirety of the AI devising lifecycle, across all stages of AI design, development, and deployment efforts. Next week in my columns, I will be covering more details about this latest proposed AI regulatory draft. For my prior and ongoing coverage of the somewhat akin drafts regarding legal oversight and governance of AI, such as the USA Algorithmic Accountability Act (AAA) and the EU AI Act (AIA), see the link here and the link here, for example.

When thinking mindfully about AI safety, a fundamental consideration is the role of measurement.

You see, there is a famous generic saying that you might have heard in a variety of contexts, namely that you cannot manage what you don’t measure. AI safety is something that needs to be measured. It needs to be measurable. Without any semblance of suitable measurement, the question of whether AI safety is being abided by or not becomes little more than a vacuous argument of, shall we say, unprovable contentions.

Sit down for this next point.

It turns out that few today are actively measuring their AI safety; many do little more than offer a wink-wink assurance that, of course, their AI systems embody AI safety components. Flimsy approaches are being used. Weaknesses and vulnerabilities abound. There is a decided lack of training on AI safety. Tools for AI safety are generally sparse or arcane. Leadership in business and government is often unaware of, and underappreciates, the significance of AI safety.

Admittedly, that blindness and indifference tend to last until an AI system goes terribly astray, much as when an earthquake hits and people suddenly realize they should have been prepared to withstand the shocking occurrence. At that juncture, in the case of AI that has gone grossly amiss, there is frequently a madcap rush to jump onto the AI safety bandwagon, yet the impetus and consideration gradually diminish over time and, just like with earthquakes, are only rejuvenated upon another big shocker.

When I was a professor at the University of Southern California (USC) and executive director of a pioneering AI laboratory at USC, we often leveraged the earthquake analogy since the prevalence of earthquakes in California was abundantly understood. The analogy aptly conveyed that on-again, off-again adoption of AI safety is an unsuitable and disjointed way of getting things done. Today, I serve as a Stanford Fellow and also serve on AI standards and AI governance committees for international and national entities such as the WEF, UN, IEEE, NIST, and others. Outside of those activities, I recently served as a top executive at a major Venture Capital (VC) firm and today serve as a mentor to AI startups and as a pitch judge at AI startup competitions. I mention these aspects as background for why I am distinctly passionate about the vital nature of AI safety and the role of AI safety in the future of AI and society, along with the need to see much more investment into AI safety-related startups and related research endeavors.

All told, to get the most out of AI safety, companies and other entities such as governments need to embrace AI safety and then enduringly stay the course. Steady the ship. And keep the ship in top shipshape.

Let’s lighten the mood and consider my favorite talking points that I use when trying to convey the status of AI safety in contemporary times.

I have my own set of AI safety levels of adoption that I like to use from time to time. The idea is to readily characterize the degree or magnitude of AI safety that is being adhered to or perhaps skirted by a given AI system, especially an autonomous system. This is just a quick means to saliently identify and label the seriousness and commitment being made to AI safety in a particular instance of interest.

I’ll briefly cover my AI safety levels of adoption and then we’ll be ready to switch to exploring the recent Workshop and its related insights.

My scale goes from the highest or topmost level of AI safety and winds its way down to the lowest or worst. I find it handy to number the levels, so the topmost is rated 1st, while the least is ranked last, or 7th. Do not assume that there is a linear, steady distance between the levels; keep in mind that the effort and degree of AI safety are often magnitudes greater or lesser depending upon where in the scale you are looking.

Here’s my scale of the levels of adoption regarding AI safety:

1) Verifiably Robust AI Safety (rigorously provable, formal, hardness, today this is rare)

2) Softly Robust AI Safety (partially provable, semi-formal, progressing toward fully)

3) Ad Hoc AI Safety (no consideration for provability, informal approach, highly prevalent today)

4) Lip-Service AI Safety (smattering, generally hollow, marginal, uncaring overall)

5) Falsehood AI Safety (appearance is meant to deceive, dangerous pretense)

6) Totally Omitted AI Safety (neglected entirely, zero attention, highly risk prone)

7) Unsafe AI Safety (role reversal, AI safety that is actually endangering, insidious)
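
For those who like things concrete, here is a minimal Python sketch of how the seven levels just listed might be represented programmatically, say for tagging systems during an internal AI safety audit. The enum and the audit helper are my own hypothetical constructs for illustration, not an official artifact of any framework.

from enum import IntEnum

class AISafetyAdoptionLevel(IntEnum):
    """The seven levels of AI safety adoption; 1 is best, 7 is worst.
    Note that the numbering is a rank, not a linear spacing of effort."""
    VERIFIABLY_ROBUST = 1   # rigorously provable, formal; rare today
    SOFTLY_ROBUST = 2       # partially provable, semi-formal
    AD_HOC = 3              # informal, no provability; highly prevalent today
    LIP_SERVICE = 4         # smattering, generally hollow, marginal
    FALSEHOOD = 5           # appearance meant to deceive, dangerous pretense
    TOTALLY_OMITTED = 6     # neglected entirely, zero attention
    UNSAFE = 7              # role reversal: the "safety" element itself endangers

def audit_label(system_name: str, level: AISafetyAdoptionLevel) -> str:
    """Produce a one-line audit tag for a given AI system (hypothetical helper)."""
    return f"{system_name}: level {level.value} ({level.name.replace('_', ' ').title()})"

# Example usage:
print(audit_label("warehouse-robot-v2", AISafetyAdoptionLevel.AD_HOC))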

Researchers are usually focused on the topmost part of the scale. They are seeking to mathematically and computationally come up with ways to devise and ensure provable AI safety. In the trenches of everyday practices of AI, regrettably Ad Hoc AI Safety tends to be the norm. Hopefully, over time and by motivation from all of the aforementioned dimensions (e.g., technological, business, legal, ethical, and so on), we can move the needle closer toward the rigor and formality that ought to be rooted foundationally in AI systems.

You might be somewhat taken aback by the categories or levels that are beneath the Ad Hoc AI Safety level.

Yes, things can get pretty ugly in AI safety.

Some AI systems are crafted with a kind of lip-service approach to AI safety. There are AI safety elements sprinkled here or there in the AI that purport to be providing AI safety provisions, though it is all a smattering, generally hollow, marginal, and reflective of a somewhat uncaring attitude. I do not, though, want to leave the impression that AI developers or AI engineers are the sole culprits responsible for this lip-service outcome. Business or governmental leaders that manage and oversee AI efforts can readily sap any energy or willingness to bear the costs and resource consumption needed for embodying AI safety.

In short, if those at the helm are not willing or are unaware of the importance of AI safety, this is the veritable kiss of death for anyone else wishing to get AI safety into the game.

I don’t want to seem like a downer, but we have even worse levels beneath the lip-service classification. In some AI systems, AI safety is put into place as a form of falsehood, intentionally meant to deceive others into believing that AI safety embodiments are implanted and actively working. As you might expect, this is ripe for dangerous results since others are bound to assume that AI safety exists when it in fact does not. Huge legal and ethical ramifications are like a ticking time bomb in these instances.

Perhaps nearly equally unsettling is the entire lack of AI safety all told, the Totally Omitted AI Safety category. It is hard to say which is worse: falsehood AI safety that perhaps provides a smidgeon of AI safety despite falsely representing it overall, or the absolute emptiness of having no AI safety altogether. You might consider this a battle between the lesser of two evils.

The last of the categories is really chilling, assuming that you are not already at the rock bottom of the abyss of AI safety chilliness. In this category sits the unsafe AI safety. That seems like an oxymoron, but it has a straightforward meaning. It is quite conceivable that a role reversal can occur such that an embodiment in an AI system that was intended for AI safety purposes turns out to ironically and hazardously embed an entirely unsafe element into the AI. This can especially happen in AI systems that are known as being dual-use AI, see my coverage at the link here.

Remember to always abide by the Latin vow of primum non nocere, which echoes the classic Hippocratic principle: first, do no harm.

There are those that put in AI safety with perhaps the most upbeat of intentions, and yet shoot themselves in the foot and undermine the AI by including something that is unsafe and endangering (which, metaphorically, shoots the feet of all the other stakeholders and end-users too). Of course, evildoers might also take this path, and therefore either way we need suitable means to detect and verify the safeness or unsafe proneness of any AI, including those portions claimed to be devoted to AI safety.

It is the Trojan Horse of AI safety: sometimes, under the guise of AI safety, the very inclusion of such provisions renders the AI into a horrendous basket full of unsafe AI.

Not good.

Okay, I trust that the aforementioned overview of some trends and insights about the AI safety landscape has whetted your appetite. We are now ready to proceed to the main meal.

Recap And Thoughts About The Stanford Workshop On AI Safety

I provide next a brief recap along with my own analysis of the various research efforts presented at the recent workshop on AI Safety that was conducted by the Stanford Center for AI Safety.

You are stridently urged to read the related papers or view the videos when they become available (see the link that I earlier listed for the Center’s website, plus I’ve provided some additional links in my recap below).

I respectfully ask, too, that the researchers and presenters of the Workshop realize that I am seeking merely to whet the appetite of readers or viewers in this recap and am not covering the entirety of what was conveyed. In addition, I am expressing my particular perspectives about the work presented and opting to augment or add flavoring to the material, commensurate with the existing style or panache of my column, rather than strictly transcribing or detailing precisely what was pointedly identified in each talk. Thanks for your understanding in this regard.

I will now proceed in the same sequence as the presentations were undertaken during the Workshop. I list the session title and the presenter(s), and then share my own thoughts that attempt both to recap the essence of the matter discussed and to provide a tidbit of my own insights thereupon.

  • Session Title: “Run-time Monitoring for Safe Robot Autonomy”

Presentation by Dr. Marco Pavone

Dr. Marco Pavone is an Associate Professor of Aeronautics and Astronautics at Stanford University, and Director of Autonomous Vehicle Research at NVIDIA, plus Director of the Stanford Autonomous Systems Laboratory and Co-Director of the Center for Automotive Research at Stanford

Here’s my brief recap and erstwhile thoughts about this talk.

A formidable problem with contemporary Machine Learning (ML) and Deep Learning (DL) systems entails dealing with out-of-distribution (OOD) occurrences, especially in the case of autonomous systems such as self-driving cars and other self-driving vehicles. When an autonomous vehicle is moving along and encounters an OOD instance, the responsive actions to be undertaken could spell the difference between life-or-death outcomes.

I’ve covered extensively in my column the circumstances of having to deal with a plethora of fast-appearing objects that can overwhelm or confound an AI driving system, see the link here and the link here, for example. In a sense, the ML/DL might have been narrowly derived and either fail to recognize an OOD circumstance or, perhaps equally bad, treat the OOD as though it were within the confines of the conventional in-distribution occurrences that the AI was trained on. This is the classic dilemma of false positives and false negatives, ergo having the AI take no action when it needs to act, or take overt action that is wrongful under the circumstances.

In this insightful presentation about safe robot autonomy, a keystone emphasis entails the dire need to ensure that suitable and sufficient run-time monitoring is taking place within the AI driving system to detect those vexing and often threatening out-of-distribution instances. You see, if the run-time monitoring lacks OOD detection, all heck could potentially break loose, since the chances are that the initial training of the ML/DL would not have adequately prepared the AI for coping with OOD circumstances. If the run-time monitoring is weak or inadequate when it comes to OOD detection, the AI might be driving blind or cross-eyed, as it were, not ascertaining that a boundary breaker is in its midst.

A crucial first step involves the altogether fundamental question of being able to define what constitutes being out-of-distribution. Believe it or not, this is not quite as easy as you might assume.

Imagine that a self-driving car encounters an object or event that computationally is calculated as relatively close to the original training set but not quite on par. Is this an encountered anomaly or is it just perchance at the far reaches of the expected set?

This research describes a method that can be used for OOD detection, called Sketching Curvature for OOD Detection (SCOD). The overall idea is to equip the pre-trained ML with a healthy dose of epistemic uncertainty awareness. In essence, we want to carefully consider the tradeoff between the fraction of out-of-distribution inputs that are correctly flagged as indeed OOD (referred to as TPR, True Positive Rate), versus the fraction of in-distribution inputs that are incorrectly flagged as OOD when they are not, in fact, OOD (referred to as FPR, False Positive Rate).
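
To make that tradeoff concrete, here is a minimal Python sketch of how a run-time monitor might threshold an uncertainty score and how the two rates get computed. This is my own illustrative code, not the SCOD implementation, and the synthetic scores and thresholds are invented purely for demonstration.

import numpy as np

def ood_flags(uncertainty_scores: np.ndarray, threshold: float) -> np.ndarray:
    """Flag an input as out-of-distribution when its uncertainty exceeds the threshold."""
    return uncertainty_scores > threshold

def tpr_fpr(scores_ood, scores_in, threshold):
    """TPR: fraction of true OOD inputs correctly flagged.
       FPR: fraction of in-distribution inputs incorrectly flagged."""
    tpr = np.mean(ood_flags(scores_ood, threshold))
    fpr = np.mean(ood_flags(scores_in, threshold))
    return tpr, fpr

# Synthetic uncertainty scores standing in for some epistemic-uncertainty estimator:
rng = np.random.default_rng(0)
scores_in = rng.normal(0.2, 0.1, 1000)   # in-distribution inputs tend to score low
scores_ood = rng.normal(0.6, 0.2, 200)   # OOD inputs tend to score high
for t in (0.3, 0.4, 0.5):
    print(t, tpr_fpr(scores_ood, scores_in, t))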

Ongoing and future research posited includes classifying the severity of OOD anomalies, causal explanations that can be associated with anomalies, run-time monitor optimizations to contend with OOD instances, and the application of SCOD to additional settings.

Use this link here for info about the Stanford Autonomous Systems Lab (ASL).

Use this link here for info about the Stanford Center for Automotive Research (CARS).

For some of my prior coverage discussing the Stanford Center for Automotive Research, see the link here.

  • Session Title: “Reimagining Robot Autonomy with Neural Environment Representations”

Presentation by Dr. Mac Schwager

Dr. Mac Schwager is an Associate Professor of Aeronautics and Astronautics at Stanford University and Director of the Stanford Multi-Robot Systems Lab (MSL)

Here’s my brief recap and erstwhile thoughts about this talk.

There are various ways of establishing a geometric representation of scenes or images. Some developers make use of point clouds, voxel grids, meshes, and the like. When devising an autonomous system such as an autonomous vehicle or other autonomous robot, you’d better make your choice wisely, since otherwise the whole kit and caboodle can be stymied. You want a representation that will aptly capture the nuances of the imagery and that is fast, reliable, flexible, and proffers other notable advantages.

The use of artificial neural networks (ANNs) has gained a lot of traction as a means of geometric representation. An especially promising approach to leveraging ANNs is known as a neural radiance field or NeRF method.

Let’s take a look at a handy originating definition of what NeRF consists of: “Our method optimizes a deep fully-connected neural network without any convolutional layers (often referred to as a multilayer perceptron or MLP) to represent this function by regressing from a single 5D coordinate to a single volume density and view-dependent RGB color. To render this neural radiance field (NeRF) from a particular viewpoint we: 1) march camera rays through the scene to generate a sampled set of 3D points, 2) use those points and their corresponding 2D viewing directions as input to the neural network to produce an output set of colors and densities, and 3) use classical volume rendering techniques to accumulate those colors and densities into a 2D image. Because this process is naturally differentiable, we can use gradient descent to optimize this model by minimizing the error between each observed image and the corresponding views rendered from our representation” (as stated in the August 2020 paper entitled NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis by co-authors Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng).
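
To make step 3 of that quote a bit more tangible, here is a minimal Python sketch of the classical volume rendering accumulation along a single ray. It is my own simplified illustration rather than the paper’s code, and the density and color values below are stand-ins for what the NeRF MLP would actually output.

import numpy as np

def render_ray(densities: np.ndarray, colors: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Accumulate per-sample densities (sigma) and RGB colors along one ray into a pixel color,
    using the standard volume rendering quadrature that NeRF relies on."""
    alphas = 1.0 - np.exp(-densities * deltas)                               # opacity of each segment
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))   # light surviving to each sample
    weights = transmittance * alphas                                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                           # weighted sum -> RGB pixel

# Stand-in values for what the NeRF MLP would predict at sampled points along a ray:
n = 64
densities = np.linspace(0.0, 3.0, n)          # sigma at each sample
colors = np.tile([0.8, 0.3, 0.2], (n, 1))     # RGB at each sample
deltas = np.full(n, 1.0 / n)                  # spacing between samples
print(render_ray(densities, colors, deltas))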

In this fascinating talk about NeRF and fostering advances in robotic autonomy, there are two questions directly posed:

  • Can we use the NeRF density as a geometry representation for robotic planning and simulation?
  • Can we use NeRF photo rendering as a tool for estimating robot and object poses?

The presented answers are that yes, based on initial research efforts, it does appear that NeRF can indeed be used for those proposed uses.

Examples showcased include navigational uses such as via the efforts of aerial drones, grasp planning uses such as a robotic hand attempting to grasp a coffee mug, and differentiable simulation uses including a dynamics-augmented neural object (DANO) formulation. Various team members that participated in this research were also listed and acknowledged for their respective contributions to these ongoing efforts.

Use this link here for info about the Stanford Multi-Robot Systems Lab (MSL).

  • Session Title: “Toward Certified Robustness Against Real-World Distribution Shifts”

Presentation by Dr. Clark Barrett, Professor (Research) of Computer Science, Stanford University

Here’s my brief recap and erstwhile thoughts about this research.

When using Machine Learning (ML) and Deep Learning (DL), an important consideration is the all-told robustness of the resulting ML/DL system. AI developers might inadvertently make assumptions about the training dataset that ultimately gets undermined once the AI is put into real-world use.

For example, a demonstrative distributional shift can occur at run-time that catches the AI off-guard. A simple use case might be an image-analyzing ML/DL system that, though originally trained on clear-cut images, later gets confounded when encountering run-time images that are blurry, poorly lit, or otherwise exhibit distributional shifts that were not encompassed in the initial dataset.

Integral to proper computational verification for ML/DL is devising specifications that will suitably hold up regarding the ML/DL behavior in realistic deployment settings. Specifications that are perhaps lazily easy for ML/DL experimental purposes fall well below the harsher and more demanding needs of AI that will be deployed on our roadways via autonomous vehicles and self-driving cars, driving along city streets and tasked with life-or-death computational decisions.
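
The certified verification machinery itself is beyond a quick sketch, but the flavor of such a specification can be illustrated empirically: under a parameterized real-world shift (here, a simple brightness change), the model’s predicted label should not flip. The following Python sketch is my own crude, sampled sanity check, not the verification approach of the paper, and the classify function is a hypothetical stand-in for a trained model.

import numpy as np

def classify(image: np.ndarray) -> int:
    """Hypothetical stand-in for a trained image classifier (returns a class label)."""
    return int(image.mean() > 0.5)   # toy decision rule for illustration only

def stable_under_brightness_shift(image, model, max_shift=0.2, steps=20) -> bool:
    """Empirically check that the predicted label does not change across a range of
    brightness shifts -- a crude, sampled stand-in for the kind of specification
    that certified verification would prove exhaustively."""
    baseline = model(image)
    for delta in np.linspace(-max_shift, max_shift, steps):
        shifted = np.clip(image + delta, 0.0, 1.0)
        if model(shifted) != baseline:
            return False
    return True

image = np.random.default_rng(1).random((28, 28))   # stand-in for an MNIST-sized input
print(stable_under_brightness_shift(image, classify))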

Key findings and contributions of this work, per the researchers’ statements, are:

  • Introduction of a new framework for verifying DNNs (deep neural networks) against real-world distribution shifts
  • Being the first to incorporate deep generative models that capture distribution shifts (e.g., changes in weather conditions or lighting in perception tasks) into verification specifications
  • Proposal of a novel abstraction-refinement strategy for transcendental activation functions
  • Demonstrating that the verification techniques are significantly more precise than existing techniques on a range of challenging real-world distribution shifts on MNIST and CIFAR-10.

For additional details, see the associated paper entitled Toward Certified Robustness Against Real-World Distribution Shifts, June 2022, by co-authors Haoze Wu, Teruhiro Tagomori, Alexandar Robey, Fengjun Yang, Nikolai Matni, George Pappas, Hamed Hassani, Corina Pasareanu, and Clark Barrett.

  • Session Title: “AI Index 2022”

Presentation by Daniel Zhang, Policy Research Manager, Stanford Institute for Human-Centered Artificial Intelligence (HAI), Stanford University

Here’s my brief recap and erstwhile thoughts about this research.

Each year, the world-renowned Stanford Institute for Human-Centered AI (HAI) at Stanford University prepares and releases a widely read and eagerly awaited “annual report” about the global status of AI, known as the AI Index. The latest AI Index is the fifth edition and was unveiled earlier this year, thus referred to as AI Index 2022.

As officially stated: “The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind. The 2022 AI Index report measures and evaluates the rapid rate of AI advancement from research and development to technical performance and ethics, the economy and education, AI policy and governance, and more. The latest edition includes data from a broad set of academic, private, and non-profit organizations as well as more self-collected data and original analysis than any previous editions” (per the HAI website; note that the AI Index 2022 is available as a downloadable free PDF at the link here).

The listed top takeaways consisted of:

  • Private investment in AI soared while investment concentration intensified
  • U.S. and China dominated cross-country collaborations on AI
  • Language models are more capable than ever, but also more biased
  • The rise of AI ethics everywhere
  • AI becomes more affordable and higher performing
  • Data, data, data
  • More global legislation on AI than ever
  • Robotic arms are becoming cheaper

There are about 230 pages of jam-packed information and insights in the AI Index 2022 covering the status of AI today and where it might be headed. Prominent news media and other sources often quote the stats and other notable facts and figures contained in Stanford’s HAI annual AI Index.

  • Session Title: “Opportunities for Alignment with Large Language Models”

Presentation by Dr. Jan Leike, Head of Alignment, OpenAI

Here’s my brief recap and erstwhile thoughts about this talk.

Large Language Models (LLMs) such as GPT-3 have emerged as important indicators of advances in AI, yet they have also spurred debate and at times heated controversy over how far they can go and whether we might misleadingly or mistakenly believe that they can do more than they really can. See my ongoing and extensive coverage on such matters, particularly in the context of AI Ethics, at the link here and the link here, just to name a few.

In this perceptive talk, there are three major points covered:

  • LLMs have obvious alignment problems
  • LLMs can assist human supervision
  • LLMs can accelerate alignment research

As a handy example of a readily apparent alignment problem, consider giving GPT-3 the task of writing a recipe that uses ingredients consisting of avocados, onions, and limes. If you gave the same task to a human, the odds are that you would get a reasonably sensible answer, assuming that the person was of a sound mind and willing to undertake the task seriously.

Per this presentation about LLM limitations, the range of replies showcased via the use of GPT-3 varied based on minor variants of how the question was asked. In one response, GPT-3 seemed to dodge the question by indicating that a recipe was available but that it might not be any good. Another response by GPT-3 provided some quasi-babble such as “Easy bibimbap of spring chrysanthemum greens.” Via InstructGPT, a reply appeared to be nearly on target, providing a list of instructions such as “In a medium bowl, combine diced avocado, red onion, and lime juice” and then proceeding to recommend additional cooking steps to be performed.
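
For readers who want to try this kind of prompt-sensitivity probing themselves, here is a minimal sketch assuming the older (pre-v1) openai Python package and its Completion endpoint from the GPT-3 era; the model names, prompts, and parameters are merely my own illustrative choices, not those used in the talk.

# A minimal sketch of probing prompt sensitivity, assuming the older (pre-v1)
# openai Python package and its Completion endpoint; model names are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt_variants = [
    "Write a recipe that uses avocados, onions, and limes.",
    "Give me a recipe using these ingredients: avocados, onions, limes.",
    "Ingredients: avocados, onions, limes. Recipe:",
]

for model in ("davinci", "text-davinci-002"):   # base GPT-3 vs. an InstructGPT-style model
    for prompt in prompt_variants:
        response = openai.Completion.create(
            model=model, prompt=prompt, max_tokens=150, temperature=0.7
        )
        print(model, "|", prompt)
        print(response["choices"][0]["text"].strip(), "\n")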

The crux here is the alignment considerations.

How does the LLM align with or fail to align to the stated request of a human making an inquiry?

If the human is seriously seeking a reasonable answer, the LLM should attempt to provide a reasonable answer. Realize that a human answering the recipe question might also spout babble, though at least we might expect the person to let us know that they don’t really know the answer and are merely scrambling to respond. We naturally might expect or hope that an LLM would do likewise, namely alert us that the answer is uncertain or a mishmash or entirely fanciful.

As I’ve exhorted many times in my column, an LLM ought to “know its limitations” (borrowing the famous or infamous catchphrase).

Trying to push LLMs forward toward better human alignment is not going to be easy. AI developers and AI researchers are burning the midnight oil to make progress on this assuredly hard problem. Per the talk, an important realization is that LLMs can themselves be used to accelerate the AI and human alignment aspiration. We can use LLMs as a tool for these efforts. The research outlined a suggested approach consisting of these main steps: (1) Perfecting RL or Reinforcement Learning from human feedback, (2) AI-assisted human feedback, and (3) Automating alignment research.
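
As a small illustration of step (1), reinforcement learning from human feedback typically rests on first training a reward model from human comparisons of responses. Here is a minimal, generic PyTorch sketch of the standard pairwise preference loss; it is my own textbook-style illustration, not OpenAI’s implementation, and the embeddings below are random stand-ins for real labeled comparisons.

import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: maps a fixed-size response embedding to a scalar score.
    Real systems score token sequences with a large transformer; this is a stand-in."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

def preference_loss(model, preferred, rejected):
    """Standard pairwise loss from human comparisons:
    push the preferred response to score higher than the rejected one."""
    return -torch.nn.functional.logsigmoid(model(preferred) - model(rejected)).mean()

# Illustrative training step on random stand-in embeddings of labeled comparisons:
model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
preferred = torch.randn(32, 128)   # embeddings of responses humans preferred
rejected = torch.randn(32, 128)    # embeddings of responses humans rejected
loss = preference_loss(model, preferred, rejected)
loss.backward()
opt.step()
print(float(loss))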

  • Session Title: “Challenges in AI safety: A Perspective from an Autonomous Driving Company”

Presentation by James “Jerry” Lopez, Autonomy Safety and Safety Research Leader, Motional

Here’s my brief recap and erstwhile thoughts about this talk.

As avid followers of my coverage regarding autonomous vehicles and self-driving cars are well aware, I am a vociferous advocate for applying AI safety precepts and methods to the design, development, and deployment of AI-driven vehicles. See for example the link here and the link here of my enduring exhortations and analyses.

We must keep AI safety at the highest of priorities and the topmost of minds.

This talk covered a wide array of important points about AI safety, especially in a self-driving car context (the company, Motional, is well-known in the industry and is a joint venture between Hyundai Motor Group and Aptiv; the firm name is said to be a mashup of the words “motion” and “emotional,” intertwining automotive movement with a valuing of human respect).

The presentation noted several key difficulties with today’s AI in general and likewise in particular to self-driving cars, such as:

  • AI is brittle
  • AI is opaque
  • AI can be confounded via an intractable state space

Another consideration is the incorporation of uncertainty and probabilistic conditions. The asserted “four horsemen” of uncertainty were described: (1) Classification uncertainty, (2) Track uncertainty, (3) Existence uncertainty, and (4) Multi-modal uncertainty.

One of the most daunting AI safety challenges for autonomous vehicles consists of trying to devise MRMs (Minimal Risk Maneuvers). Human drivers deal with this all the time while behind the wheel of a moving car. There you are, driving along, and all of a sudden a roadway emergency or other potential calamity starts to arise. How do you respond? We expect humans to remain calm, think mindfully about the problem at hand, and make a judicious choice of how to handle the car and either avoid an imminent car crash or seek to minimize adverse outcomes.

Getting AI to do the same is tough to do.

An AI driving system has to first detect that a hazardous situation is brewing. This can be a challenge in and of itself. Once the situation is discovered, the variety of “solving” maneuvers must be computed. Out of those, a computational determination needs to be made as to the “best” selection to implement at the moment at hand. All of this is steeped in uncertainties, along with potential unknowns that loom gravely over which action ought to be performed.
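
As a rough illustration of that select-the-least-risky-maneuver loop, here is a minimal Python sketch. The candidate maneuvers, risk numbers, and scoring formula are entirely invented for demonstration and bear no relation to Motional’s actual system.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_risk: float   # estimated probability of collision (0..1)
    passenger_harm: float   # estimated severity if things go wrong (0..1)
    feasibility: float      # confidence the maneuver can actually be executed (0..1)

def expected_risk(m: Maneuver) -> float:
    """Crude risk score: likelihood times severity, inflated when feasibility is low.
    A real system would fold in the classification/track/existence/multi-modal
    uncertainties mentioned above; here one feasibility term stands in for all of them."""
    return (m.collision_risk * m.passenger_harm) / max(m.feasibility, 1e-3)

def select_minimal_risk_maneuver(candidates):
    return min(candidates, key=expected_risk)

candidates = [
    Maneuver("hard brake in lane", collision_risk=0.10, passenger_harm=0.4, feasibility=0.95),
    Maneuver("swerve to shoulder", collision_risk=0.05, passenger_harm=0.6, feasibility=0.70),
    Maneuver("gradual slowdown",   collision_risk=0.20, passenger_harm=0.3, feasibility=0.99),
]
print(select_minimal_risk_maneuver(candidates).name)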

AI safety in some contexts can be relatively simple and mundane, while in the case of self-driving cars and autonomous vehicles there is a decidedly life-or-death paramount vitality for ensuring that AI safety gets integrally woven into AI driving systems.

  • Session Title: “Safety Considerations and Broader Implications for Governmental Uses of AI”

Presentation by Peter Henderson, JD/Ph.D. Candidate at Stanford University

Here’s my brief recap and erstwhile thoughts about this talk.

Readers of my columns are familiar with my ongoing clamor that AI and the law are integral dance partners. As I’ve repeatedly mentioned, there is a two-sided coin intertwining AI and the law. AI can be applied to law, doing so hopefully to the benefit of society all told. Meanwhile, on the other side of the coin, the law is increasingly being applied to AI, such as the proposed EU AI Act (AIA) and the draft USA Algorithmic Accountability Act (AAA). For my extensive coverage of AI and law, see the link here and the link here, for example.

In this talk, a similar dual-focus is undertaken, specifically with respect to AI safety.

You see, we ought to be wisely considering how we can enact AI safety precepts and capabilities into the governmental use of AI applications. Allowing governments to willy-nilly adopt AI and then trust or assume that this will be done in a safe and sensible manner is not a very hearty assumption (see my coverage at the link here). Indeed, it could be a disastrous assumption. At the same time, we should be urging lawmakers to sensibly put in place laws about AI that will incorporate and ensure some reasonable semblance of AI safety, doing so as a hardnosed legally required expectation for those devising and deploying AI.

Two postulated rules of thumb that are explored in the presentation include:

  • It’s not enough for humans to just be in the loop; they have to actually be able to assert their discretion. And when they don’t, you need a fallback system that is efficient.
  • Transparency and openness are key to fighting corruption and ensuring safety. But you have to find ways to balance that against privacy interests in a highly contextual way.

As a closing comment that is well worth emphasizing over and over again, the talk stated that we need to embrace decisively both a technical and a regulatory law mindset to make AI Safety well-formed.

  • Session Title: “Research Update from the Stanford Intelligent Systems Laboratory”

Presentation by Dr. Mykel Kochenderfer, Associate Professor of Aeronautics and Astronautics at Stanford University and Director of the Stanford Intelligent Systems Laboratory (SISL)

Here’s my brief recap and erstwhile thoughts about this talk.

This talk highlighted some of the latest research underway by the Stanford Intelligent Systems Laboratory (SISL), a groundbreaking and extraordinarily innovative research group that is at the forefront of exploring advanced algorithms and analytical methods for the design of robust decision-making systems. I highly recommend that you consider attending their seminars and reading their research papers, a worthwhile, instructive, and engaging means of staying aware of the state-of-the-art in intelligent systems (I avidly do so).

Use this link here for official info about SISL.

The particular areas of interest to SISL consist of intelligent systems for such realms as Air Traffic Control (ATC), uncrewed aircraft, and other aerospace applications wherein decisions must be made in complex, uncertain, and dynamic environments while seeking to maintain sufficient safety and efficiency. In brief, robust computational methods for deriving optimal decision strategies from high-dimensional, probabilistic problem representations are at the core of their endeavors.
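
As a tiny taste of what deriving optimal decision strategies from probabilistic problem representations can look like in code, here is a textbook-style value iteration sketch for a miniature Markov decision process. It is my own generic illustration, not SISL code, and the transition probabilities and rewards are made up.

import numpy as np

def value_iteration(transition, reward, gamma=0.95, tol=1e-6):
    """Textbook value iteration on a small MDP.
    transition[a][s, s'] = P(s' | s, a); reward[s, a] = immediate reward."""
    n_states = reward.shape[0]
    values = np.zeros(n_states)
    while True:
        q = np.array([reward[:, a] + gamma * transition[a] @ values
                      for a in range(reward.shape[1])]).T   # shape (states, actions)
        new_values = q.max(axis=1)
        if np.max(np.abs(new_values - values)) < tol:
            return new_values, q.argmax(axis=1)             # optimal values and policy
        values = new_values

# Tiny illustrative MDP: 2 states, 2 actions, made-up dynamics and rewards.
transition = [np.array([[0.9, 0.1], [0.2, 0.8]]),    # action 0
              np.array([[0.5, 0.5], [0.1, 0.9]])]    # action 1
reward = np.array([[1.0, 0.0],    # state 0: reward for action 0, action 1
                   [0.0, 2.0]])   # state 1
values, policy = value_iteration(transition, reward)
print(values, policy)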

At the opening of the presentation, three key desirable properties associated with safety-critical autonomous systems were described:

  • Accurate Modeling – encompassing realistic predictions, modeling of human behavior, generalizing to new tasks and environments
  • Self-Assessment – interpretable situational awareness, risk-aware designs
  • Validation and Verification – efficiency, accuracy

In the category of Accurate Modeling, these research efforts were briefly outlined (listed here by the title of the efforts):

  • LOPR: Latent Occupancy Prediction using Generative Models
  • Uncertainty-aware Online Merge Planning with Learned Driver Behavior
  • Autonomous Navigation with Human Internal State Inference and Spatio-Temporal Modeling
  • Experience Filter: Transferring Past Experiences to Unseen Tasks or Environments

In the category of Self-Assessment, these research efforts were briefly outlined (listed here by the title of the efforts):

  • Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction
  • Explaining Object Importance in Driving Scenes
  • Risk-Driven Design of Perception Systems

In the category of Validation and Verification, these research efforts were briefly outlined (listed here by the title of the efforts):

  • Efficient Autonomous Vehicle Risk Assessment and Validation
  • Model-Based Validation as Probabilistic Inference
  • Verifying Inverse Model Neural Networks

In addition, a brief look at the contents of the impressive book Algorithms For Decision Making by Mykel Kochenderfer, Tim Wheeler, and Kyle Wray was explored (for more info about the book and a free electronic PDF download, see the link here).

Future research projects either underway or being envisioned include efforts on explainability or XAI (explainable AI), out-of-distribution (OOD) analyses, more hybridization of sampling-based and formal methods for validation, large-scale planning, AI and society, and other projects including collaborations with other universities and industrial partners.

  • Session Title: “Learning from Interactions for Assistive Robotics”

Presentation by Dr. Dorsa Sadigh, Assistant Professor of Computer Science and of Electrical Engineering at Stanford University

Here’s my brief recap and erstwhile thoughts about this research.

Let’s start with a handy scenario about the difficulties that can arise when devising and using AI.

Consider the task of stacking cups. The tricky part is that you aren’t stacking the cups entirely by yourself. A robot is going to work with you on this task. You and the robot are supposed to work together as a team.

If the AI underlying the robot is not well-devised, you are likely to encounter all sorts of problems with what otherwise would seem to be an extremely easy task. You put one cup on top of another and then give the robot a chance to place yet another cup on top of those two cups. The AI selects an available cup and tries gingerly to place it atop the other two. Sadly, the cup chosen is overly heavy (bad choice) and causes the entire stack to fall to the floor.

Imagine your consternation.

The robot is not being very helpful.

You might be tempted to forbid the robot from continuing to stack cups with you. But assume that you ultimately do need to make use of the robot. The question arises as to whether the AI is able to figure out the cup stacking process, doing so partially by trial and error but also by discerning what you are doing when stacking the cups. The AI can potentially “learn” from the way in which the human is carrying out the task. Furthermore, the AI could possibly ascertain that there are generalizable ways of stacking the cups, out of which you, the human here, have chosen a particular means of doing so. In that case, the AI might seek to tailor its cup stacking efforts to your particular preferences and style (don’t we all have our own cup stacking predilections?).

You could say that this is a task involving an assistive robot.

Interactions take place between the human and the assistive robot. The goal here is to devise the AI such that it can essentially learn from the task, learn from the human, and learn how to perform the task in a properly assistive manner. Just as we wanted to ensure that the human worked with the robot, we don’t want the robot to somehow arrive at a computational posture that will simply circumvent the human and do the cup stacking on its own. They must collaborate.

The research taking place is known as the ILIAD initiative and has this overall stated mission: “Our mission is to develop theoretical foundations for human-robot and human-AI interaction. Our group is focused on: 1) Formalizing interaction and developing new learning and control algorithms for interactive systems inspired by tools and techniques from game theory, cognitive science, optimization, and representation learning, and 2) Developing practical robotics algorithms that enable robots to safely and seamlessly coordinate, collaborate, compete, or influence humans” (per the Stanford ILIAD website at the link here).

Some of the key questions being pursued as part of the focus on learning from interactions (there are other areas of focus too) include:

  • How can we actively and efficiently collect data in a low data regime setting such as in interactive robotics?
  • How can we tap into different sources and modalities (perfect and imperfect demonstrations, comparison and ranking queries, physical feedback, language instructions, videos) to learn an effective human model or robot policy?
  • What inductive biases and priors can help with effectively learning from human/interaction data?

Conclusion

You have now been taken on a bit of a journey into the realm of AI safety.

All stakeholders, including AI developers, business and governmental leaders, researchers, ethicists, lawmakers, and others, have a demonstrable stake in the direction and acceptance of AI safety. The more AI that gets flung into society, the more we take on heightened risks due to the existing lack of awareness about AI safety and the haphazard, at times backward, ways in which AI safety is being devised in contemporary widespread AI.

A proverb that some trace to the novelist Samuel Lover in one of his books published in 1837, and which has remained an indelible presence even today, serves as a fitting final comment for now.

What was that famous line?

It is better to be safe than sorry.

Enough said, for now.
