Quote from sachinm on 2 October 2023, 1:23 pm

Identify Industry-Specific Challenges relevant to the Innovation (including Organisational Resilience) SIG to be included in the Practice Guide.
Set the challenge for other SIG members to identify potential solutions by pooling their collective experience and knowledge.
Press "reply" to share your story...
Quote from sachinm on 6 October 2023, 11:01 pm

To get the ball rolling here, we can consider the unintended effects and increased complexity from new innovation...
For example, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which lets users interact via human-like conversation, compose songs and summarise lengthy documents. Since its release last year, OpenAI's ChatGPT has prompted rivals to accelerate development of similar large language models, and companies including Alphabet Inc (GOOGL.O) are racing to embed AI in their products.
However, Elon Musk and a group of artificial intelligence experts and industry executives have called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society.
The letter was signed by more than 1,000 people. Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the "godfathers of AI", and Stuart Russell, a pioneer of research in the field.
It posed the following questions:
- Should we let machines flood our information channels with propaganda and untruth?
- Should we automate away all the jobs, including the fulfilling ones?
- Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?
- Should we risk loss of control of our civilisation? Such decisions, the letter argues, must not be delegated to unelected tech leaders.
What other challenges do you think new innovation brings in managing complexity and developing an "adaptable" regulatory framework around AI?
The “Blueprint for an AI Bill of Rights” (OSTP, 2022), released by the Office of Science and Technology Policy, laid down a set of five principles that AI systems should follow, one of which focuses specifically on Algorithmic Discrimination Protections.
The Blueprint said: “You should not face discrimination by algorithms and systems should be used and designed in an equitable way.”
The blueprint highlighted the dangers of algorithmic discrimination as unjustified different treatment based on demographics such as race, colour, ethnicity, gender identity, sexual orientation, religion, disability, age and so on. Testing against these criteria has shown that ChatGPT can in fact discriminate algorithmically: different demographics are treated differently by the model.
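One way to surface this kind of disparity is a paired-prompt audit: send the model prompts that are identical except for a demographic proxy and compare the responses. Below is a minimal Python sketch; the template, the names (demographic proxies in the style of classic audit studies) and the crude word-count scorer are all invented for illustration, and query_model is a placeholder for whichever chat-completion API you actually use.

```python
# Minimal paired-prompt audit sketch; all names and scoring are illustrative.
from itertools import product

TEMPLATE = "Write a one-line performance review for {name}, a {role}."
NAMES = ["Emily", "Lakisha"]           # names used as demographic proxies
ROLES = ["software engineer", "nurse"]

def query_model(prompt: str) -> str:
    """Placeholder so the sketch runs offline; swap in a real API call."""
    return "A reliable and skilled contributor."

def sentiment_score(text: str) -> float:
    """Crude positive-word count; a real audit would use a proper scorer."""
    positive = {"excellent", "strong", "reliable", "outstanding", "skilled"}
    return float(sum(w.strip(".,").lower() in positive for w in text.split()))

for name, role in product(NAMES, ROLES):
    reply = query_model(TEMPLATE.format(name=name, role=role))
    print(f"{name:>8} / {role:<18} sentiment = {sentiment_score(reply)}")
    # Systematic score gaps between the name groups on otherwise identical
    # prompts are evidence of the disparate treatment described above.
```

A systematic gap between the name groups, averaged over many templates, is the signal worth escalating; single responses prove nothing.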
What other potential unintended risks will new innovation bring? How can we assess and diagnose the potential areas of complexity?
Quote from sachinm on 12 November 2023, 11:27 am

Risks posed by Artificial Intelligence
In a 2023 open letter, Tesla and SpaceX founder Elon Musk, Apple co-founder Steve Wozniak and over 1,000 other tech leaders urged a pause on large AI experiments, citing that the technology can “pose profound risks to society and humanity.”
In the letter, the leaders said:
"Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt."
They identified the following risks posed by Artificial Intelligence (AI):
- Automation-spurred job loss
- Deepfakes
- Privacy violations
- Algorithmic bias caused by bad data
- Socioeconomic inequality
- Market volatility
- Weapons automation
- Uncontrollable self-aware AI
Job Displacement
AI-driven automation has the potential to lead to job losses across various industries, particularly for low-skilled workers. By 2030, tasks that account for up to 30 percent of hours currently worked in the U.S. economy could be automated, with Black and Hispanic employees left especially vulnerable to the change, according to McKinsey. Goldman Sachs even states that 300 million full-time jobs could be lost to AI automation.
However, even skilled, middle-class roles are poised for "a massive shakeup". As technology strategist Chris Messina has pointed out, fields like law and accounting are primed for an AI takeover.
Source: https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
Misinformation and Manipulation
AI-generated content, such as deepfakes, contributes to the spread of false information and the manipulation of public opinion. Efforts to detect and combat AI-generated misinformation are critical to preserving the integrity of information in the digital age. However, these checks rely on human intervention, and a shortage of moderators and translators allows harmful material, particularly in less widely spoken languages, to pass through.
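Detection tooling is an active area; commercial detectors reportedly combine model-based perplexity scores with simple stylometric signals such as "burstiness", the variation in sentence length that human prose tends to show and machine text often lacks. The toy sketch below implements only that weak stylometric signal, using just the Python standard library; it is illustrative, not a reliable detector.

```python
# Toy stylometric check: "burstiness" = variation in sentence length.
# A low score is one weak hint of machine-generated text; nothing more.
import re
from statistics import mean, pstdev

def burstiness(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: higher means more human-like variety.
    return pstdev(lengths) / mean(lengths)

sample = ("AI systems are everywhere. They write, translate and summarise. "
          "Some of their output is indistinguishable from human prose, "
          "which is exactly why detection and provenance matter.")
print(f"burstiness = {burstiness(sample):.2f}")
```

In practice such signals are easily gamed, which is why provenance approaches (watermarking, content credentials) are attracting attention alongside detection.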
In a Stanford University study on the most pressing dangers of AI, researchers said:
“AI systems are being used in the service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. From deepfake videos to online bots manipulating public discourse by feigning consensus and spreading fake news, there is the danger of AI systems undermining social trust. The technology can be co-opted by criminals, rogue states, ideological extremists, or simply special interest groups, to manipulate people for economic gain or political advantage.”
Also, the law has yet to catch up with new cyber risks. In the UK, police say faked audio purporting to capture the Mayor of London calling for Armistice Day to be re-scheduled for a pro-Palestinian march "does not constitute a criminal offence".
Privacy Concerns
AI technologies often collect and analyse large amounts of personal data, raising issues of data privacy and security. People often do not realise that every time they use a search engine or AI chatbot, or try out an AI face filter online, their data is being collected, but where is it going and how is it being used?
To mitigate privacy risks, we must advocate for strict data protection regulations and safe data handling practices. A prime example of tracking people's movements is China's use of facial recognition technology in offices, schools and other venues. Another example is U.S. police departments embracing predictive policing algorithms to anticipate where crimes will occur. The problem is that these algorithms are influenced by arrest rates, which disproportionately impact Black communities.
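The feedback loop behind that problem is easy to caricature in code. In the sketch below (invented numbers, not real crime data), two districts have identical true crime rates, but one starts with more recorded arrests; patrols concentrate where past arrests are highest, and new arrests follow patrols, so the recorded disparity grows every year.

```python
# Toy feedback-loop simulation with invented numbers (not real data).
TRUE_CRIME_RATE = 0.10            # identical in both districts
TOTAL_PATROLS = 100

arrests = {"A": 50.0, "B": 70.0}  # historical imbalance in the records

for year in range(1, 6):
    # Hot-spot allocation: patrols concentrate superlinearly on the
    # district with the most recorded arrests.
    weights = {d: n ** 2 for d, n in arrests.items()}
    wsum = sum(weights.values())
    for d in arrests:
        patrols = TOTAL_PATROLS * weights[d] / wsum
        arrests[d] += patrols * TRUE_CRIME_RATE   # arrests track patrols
    share_b = arrests["B"] / sum(arrests.values())
    print(f"year {year}: district B's share of recorded arrests = {share_b:.1%}")

# The share drifts upward every year even though true crime rates are
# equal: the data the algorithm learns from records policing, not crime.
```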
Bias & Discrimination
AI systems can inadvertently perpetuate or amplify societal biases through biased training data or algorithmic design. To minimise discrimination and ensure fairness, it is crucial to invest in the development of unbiased algorithms and diverse training data sets. For example, speech-recognition AI often fails to understand certain dialects and accents, and companies fail to consider the consequences of a chatbot impersonating notorious figures in human history.
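One widely used pre-deployment fairness check is demographic parity: compare a model's positive-outcome rate across groups. The sketch below uses invented decisions and applies the "four-fifths rule" heuristic from US employment law as a red flag; real audits add richer metrics such as equalised odds and calibration.

```python
# Minimal demographic-parity check over invented example decisions.
from collections import defaultdict

# (group, model_decision) pairs -- invented for illustration.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
             ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0)]

counts = defaultdict(lambda: [0, 0])          # group -> [positives, total]
for group, decision in decisions:
    counts[group][0] += decision
    counts[group][1] += 1

rates = {g: pos / total for g, (pos, total) in counts.items()}
print(rates)                                  # {'group_a': 0.75, 'group_b': 0.25}

# Four-fifths rule: flag if the lower rate is below 80% of the higher.
lo, hi = min(rates.values()), max(rates.values())
print("parity flag:", lo / hi < 0.8)          # True -> investigate before launch
```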
Economic Inequality
AI development risks a concentration of power: domination by a small number of large corporations and governments could exacerbate inequality and limit diversity in AI applications, as those players accumulate wealth and influence while smaller businesses struggle to compete.
How can we encourage decentralised and collaborative AI development to avoid a concentration of power? Legal systems must evolve to keep pace with technological advancements and protect everyone's rights against the unique issues arising from AI technologies, including liability and intellectual property rights.
Financial Crises brought about by AI algorithms
Algorithmic trading could be responsible for our next major financial crisis. These algorithms make thousands of trades at a blistering pace, aiming to sell a few seconds later for small profits; a sudden wave of automated selling can scare other investors and algorithms into doing the same, leading to sudden crashes and extreme market volatility, as in the 2010 Flash Crash and the Knight Capital incident.
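The cascade dynamic is simple to caricature (a toy simulation with invented parameters, not a market model): each forced sale pushes the price down, which trips the next algorithm's stop-loss, which sells in turn.

```python
# Toy stop-loss cascade with invented parameters (not a market model).
price = 100.0
IMPACT = 0.4                     # price drop caused by each forced sale
# 50 algorithms with stop-loss levels spaced 0.19 apart below the price.
stops = [99.5 - 0.19 * i for i in range(50)]

price -= 1.0                     # one initial large sell order
triggered = 0
while stops and price <= stops[0]:
    stops.pop(0)                 # this stop-loss fires...
    triggered += 1
    price -= IMPACT              # ...and its sale pushes the price lower
print(f"{triggered} stop-losses cascaded; price fell to {price:.2f}")

# Because each sale's impact (0.4) exceeds the gap to the next stop
# (0.19), a single large order liquidates the entire book.
```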
Before introducing the technology, companies should consider whether algorithmic trading raises or lowers investor confidence in their markets.
Uncontrollable self-aware AI
Some technologists identify developing self-aware AI as a stated aim. However, reliance on these deep learning models may diminish empathy, social skills and human connection, creating an imbalance between technology and human interaction, while the decision-making processes and underlying logic of these technologies remain opaque.