GLOBAL RESEARCH SYNDICATE

Interview With Steve Eglash, Stanford University

by globalresearchsyndicate · July 12, 2020 · Data Collection

One of the most frequently cited challenges with AI is the difficulty of obtaining well-understood explanations of how AI systems make their decisions. While this may not matter much for machine learning applications such as product recommendations or personalization, any use of AI in critical applications, where decisions need to be understood, faces transparency and explainability issues.

On a recent AI Today podcast, Steve Eglash, Director of Strategic Research Initiatives in the Computer Science Department at Stanford University, shared insights into research on the evolution of transparent and responsible AI. Eglash is a staff member in the Computer Science Department, where he leads a small group that runs research programs connecting the university with outside companies: the group helps companies share their perspectives and technology with students, and helps students share their technology with companies. Before coming to Stanford, Steve worked as an electrical engineer, a role that sat between technology and science, and he also spent time in investment, government, and research before moving into academia.

Steve Eglash, Stanford

As AI is adopted across industries and governments, Stanford students have many opportunities to dive deeper into how it is used and to explore new areas of interest. Understanding how artificial intelligence actually works is crucial because we increasingly rely on it in mission-critical roles, such as autonomous vehicles, where a mistake can cause serious harm or even be fatal. Research into transparent and explainable AI can therefore make those systems more trustworthy and reliable: to ensure that AI operates safely, we need to understand how and why a computer makes its decisions, and we also want to be able to analyze those decisions after an incident.

Many modern AI systems are built on neural networks whose inner workings we understand only at a basic level, since the algorithms themselves provide little in the way of explanation. This lack of explainability is why AI systems are often described as a “black box.” Researchers are now turning their attention to the details of how neural networks work. Because of their size, neural networks are hard to check for errors: every connection between neurons, and the weight on each connection, adds complexity that makes after-the-fact examination of decisions very difficult.

Reluplex – An Approach to Transparent AI

Verification is the process of formally proving properties of neural networks. Reluplex is a verification tool, developed recently by a team of researchers, designed to scale to large neural networks. The technology behind Reluplex allows it to operate quickly across large networks. Reluplex was used to test an airborne collision detection and avoidance system for autonomous drones. The tool was able to prove that some parts of the network behaved as they should, and it also found an error in the network, which was fixed in the next implementation.
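Reluplex itself is an SMT-based solver and well beyond a few lines of code, but the core idea of verification, proving a property over an entire region of inputs rather than testing sampled points, can be illustrated with a much simpler technique: interval bound propagation. The sketch below (network weights are made up for illustration) pushes an input box through a small ReLU network and returns sound bounds on the output.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def propagate_bounds(lo, hi, weights, biases):
    """Push an input box [lo, hi] through a ReLU network layer by layer,
    keeping sound lower/upper bounds on every neuron."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Positive weights pull from the same bound, negative from the opposite.
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lo, hi = relu(lo), relu(hi)
    return lo, hi

# Toy 2-2-1 network; the input box is the unit square [0, 1] x [0, 1].
weights = [np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([[1.0, 1.0]])]
biases  = [np.array([0.0, 0.0]),                np.array([-0.5])]

lo, hi = propagate_bounds(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                          weights, biases)
# If hi stays below a safety threshold, the property "the output never
# exceeds the threshold" is proven for every input in the box at once.
print(lo, hi)
```

Real verifiers like Reluplex reason much more precisely about ReLU case splits; interval bounds are the loosest (but cheapest) version of the same guarantee.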

Interpretability is another area Steve brought up in connection with the black-box problem. Given a large model, is it possible to understand how it makes its predictions? He uses the example of an image-recognition system analyzing a picture of a dog on a beach. There are two ways it could identify the dog: it could take the pixels that make up the dog and associate them with a dog, or it could take the pixels of the beach and sky around the dog and infer from the context that a dog is there. Without an understanding of how the system reaches its decisions, you don’t know what the network has actually been trained to rely on.

If an AI uses the first method to recognize that a dog is present, it is reasoning in a way that roughly resembles how our own brains work. The alternate method can be seen as a weak association, because it does not rely on the part of the picture that actually contains the dog. To confirm that an AI is processing images properly, we need to know how it does so, and a good portion of current research is devoted to this task and others like it.
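One common way to probe which pixels a model relies on, in the spirit of the dog-versus-background example above, is occlusion sensitivity: hide one region at a time and watch how the score changes. The sketch below uses a hypothetical stand-in scoring function rather than a real classifier, so the numbers are only illustrative.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Slide a gray patch over the image and record how much the model's
    score drops when each region is hidden. Large drops mark the regions
    the model actually relies on."""
    base = score_fn(image)
    h, w = image.shape
    drops = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = image.mean()  # hide this region
            drops[i // patch, j // patch] = base - score_fn(occluded)
    return drops

# Stand-in "dog score": the mean brightness of the image's left half,
# mimicking a model that keys on one region rather than the whole scene.
def toy_score(img):
    return img[:, : img.shape[1] // 2].mean()

img = np.zeros((8, 8)); img[:, :4] = 1.0  # bright "dog" on the left
drops = occlusion_map(img, toy_score)
# Occluding left-half patches lowers the score; right-half patches don't,
# revealing that the "model" ignores the right side of the image entirely.
```

If a real classifier's score for “dog” dropped mostly when the beach and sky were occluded, that would be direct evidence of the weak, context-based association described above.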

Exploring Data Bias

Data bias in AI systems is also a focus at Stanford. AI systems have been found to exhibit a fair amount of bias stemming from the data used to train the machine learning models: if that data does not contain the information needed for an unbiased analysis, the resulting decisions will be skewed. Beyond biased data, the systems themselves can introduce bias by effectively weighting only certain groups; a model trained on data dominated by larger groups will tend to be biased toward those larger groups.

We need to remove bias from AI systems as their interactions with humans increase. AI now makes decisions about people, such as insurance eligibility and the likelihood that a person will reoffend, along with other potentially life-changing determinations. These decisions have real-world consequences, and we don’t want computers to perpetuate inequality and injustice.

To remove bias from AI, data scientists need to analyze their systems and account for societal bias in the underlying data. To this end, Professor Percy Liang is working with his students on distributionally robust optimization, which aims to train models that perform well across all groups rather than favoring the majority. Other researchers are focusing on fairness and equality in artificial intelligence.
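The contrast between a standard objective and a distributionally robust one can be made concrete with a toy calculation. The per-group losses and group sizes below are invented for illustration; the point is only that averaging lets a large majority group hide poor performance on a small one, while the robust objective optimizes for the worst-off group.

```python
import numpy as np

def average_loss(losses, group_sizes):
    """Standard training objective: groups count in proportion to their
    size, so a large majority group dominates the number."""
    sizes = np.asarray(group_sizes, dtype=float)
    return float(np.dot(losses, sizes) / sizes.sum())

def worst_group_loss(losses, group_sizes):
    """Distributionally robust objective: score the model by its worst
    group, regardless of how many examples that group has."""
    return float(np.max(losses))

# Hypothetical model: low loss on a 900-example majority group,
# high loss on a 100-example minority group.
losses, sizes = np.array([0.05, 0.80]), [900, 100]
print(average_loss(losses, sizes))      # 0.125 — looks fine on average
print(worst_group_loss(losses, sizes))  # 0.8   — robust objective flags it
```

Training against the worst-group objective pushes the model to close the gap on the minority group instead of optimizing the headline average.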

Since AI systems have not yet proven their explainability and complete trustworthiness, Steve thinks AI will mostly be used in an augmented and assistive manner rather than fully autonomously. By keeping a human in the loop, we get a better chance to catch the system when it makes questionable decisions and to exert more control over the final outcome of AI-assisted actions.

Copyright © 2024 Globalresearchsyndicate.com
