Uncategorized Archives - Ross Dawson | Keynote speaker | Futurist | Strategy advisor

The most dangerous idea ever is that humans will be vastly transcended by AI
https://rossdawson.com/most-dangerous-idea-ever-humans-transcended-ai/
Sun, 05 Nov 2023

The advent of next-generation AI has brought into sharp focus one of the biggest divides of all: our perception of humanity’s place in the Universe.

I endlessly read people arguing that humans will be to AI as animals or insects are to humans. They envision a future where AI’s relentless advancement transcends every faculty we possess.

The countervailing stance is that human potential is unlimited. We have deliberately and consistently increased our capabilities and knowledge, and now we will use the tools we have created to continue to advance.

The rise of AI has intensified this debate, leading us to question: Are we, as humans, inherently limited or unlimited?

My thinking on this was clarified and crystallized by reading David Deutsch’s seminal work “The Beginning of Infinity”, in which he lays out an extensive and powerful case for human knowledge and abilities being unbounded.

He argues that humans are “universal explainers” who have created and learned to use scientific principles to indefinitely improve our theories, knowledge, and understanding.

At any point our understanding is limited and incorrect, but by continuing to apply the same principles we will consistently and indefinitely advance our knowledge.

He writes:

The astrophysicist Martin Rees has speculated that somewhere in the universe ‘there could be life and intelligence out there in forms we can’t conceive. Just as a chimpanzee can’t understand quantum theory, it could be there are aspects of reality that are beyond the capacity of our brains.’ But that cannot be so. For if the ‘capacity’ in question is mere computational speed and amount of memory, then we can understand the aspects in question with the help of computers – just as we have understood the world for centuries with the help of pencil and paper. As Einstein remarked, ‘My pencil and I are more clever than I.’

In terms of computational repertoire, our computers – and brains – are already universal (see Chapter 6). But if the claim is that we may be qualitatively unable to understand what some other forms of intelligence can – if our disability cannot be remedied by mere automation – then this is just another claim that the world is not explicable. Indeed, it is tantamount to an appeal to the supernatural, with all the arbitrariness that is inherent in such appeals, for if we wanted to incorporate into our world view an imaginary realm explicable only to superhumans, we need never have bothered to abandon the myths of Persephone and her fellow deities.

So human reach is essentially the same as the reach of explanatory knowledge itself.

The track record of humanity is testament to our ability to develop knowledge: in just the last few thousand years, sitting on a planet in a far-flung galaxy, we have grown from superstition to understanding our Universe in great depth, from sub-atomic particles to the structure of the cosmos.

Indeed, the pace of (human) scientific progress and knowledge is not just fast, it is accelerating, by just about any measure you choose.

People compare the exponential technologies underlying AI with our finite cognition and conclude we will be left behind. This view is so deeply flawed that I will address it in more detail in another post.

In short, humans are very clearly not static. We co-evolve with the technologies we have created, constantly extending the boundaries of what it means to be human.

It is possible that to keep pace we will need to augment ourselves with brain-computer interfaces and other cognitive amplification technologies. As I wrote in my 2002 book Living Networks:

the real issue is not whether humans will be replaced by machines, because at the same time as computing technology is progressing, people are merging with machines. If machines take over the world, we will be those machines.

Believing we’re doomed to be dwarfed by super-intelligent AI is essentially betting against humanity.

It is giving up. It is lacking faith in humanity. It’s a surrender to superstition, to the idea that there is something undefined and unknowable beyond our capacity to imagine or understand.

The essence of being human is this: We face and solve problems and we progress. The concept of things we cannot understand goes against everything that humans have demonstrated themselves to be.

In an era defined by the rapid rise of AI, it is crucial that we maintain faith in the human capacity for limitless growth and expansion. The unfolding story of our species is not one of succumbing to imagined limits but one of constantly redefining what is possible.

Our future is inevitably one of Humans + AI – our species amplified by the intelligences we have created.

We will not be subsidiary players in this union. Our ability to understand, grow, and frame what this incredible pairing can achieve is unlimited.

At this point we have no solid evidence of how this will turn out. It is your choice:

Believe humans are intrinsically limited and that we will be as cockroaches to superior intelligences.

Or bet on a species that is intelligent and adaptable enough to have created everything we have so far, and on our ability to continue to progress and grow, harnessing the power of our inventions.

University education still matters, especially for generational economic mobility
https://rossdawson.com/university-education-still-matters-especially-for-generational-economic-mobility/
Sun, 29 Oct 2023

Formal education is critical for generational mobility, allowing young people to transcend ingrained perceptions and not just learn, but demonstrate their capabilities through recognised paths.

Jose Luis Alvarado, dean of the Fordham Graduate School of Education, has written an excellent counter-narrative to those saying that tertiary education doesn’t matter any more, on the deep inequity of the anti-college movement. He shares how he was told at school that he shouldn’t aspire to going to college. Others didn’t see his potential, quite possibly because of his family background.

I have long pointed to the decreasing relevance of higher education.

Employers are finding real-world capabilities and peer esteem are better indicators of performance than exam-assessed degrees.

Educational programs are often out of date while they are taught, let alone when students graduate.

Young entrepreneurs can arguably learn more by doing than by attending any less-than-excellent educational course.

Yet it’s absolutely true that these views come from a position of privilege.

The quality of tertiary education absolutely needs to improve and be more relevant to a rapidly changing world.

But its very existence offers pathways to anyone to achieve anything, not just entrepreneurial, but in every facet of society.

Which leaves us with the challenge of ensuring that everyone, regardless of wealth or background, has clear access to quality tertiary education, and full encouragement to pursue that if they want to.

That is at the heart of a fair society.

David Droga at SxSW Sydney on creativity and AI
https://rossdawson.com/david-droga-at-sxsw-sydney-on-creativity-and-ai/
Sat, 21 Oct 2023

David Droga is absolutely someone I wanted to hear from at SxSW Sydney (among many other claims to fame, he is the most awarded creative ever at Cannes Lions and CEO of the $16 billion agency Accenture Song).

Creativity was long supposed to be the last bastion of human dominion over machines. Yet over the last 18 months that has been cast into doubt. So what is the future of creativity in a world in which AI is – in some ways at least – becoming creative? It’s best to get it in David’s own words. Here are some of the most interesting quotes I captured from the session.

“My starting point is I don’t think all creativity needs to survive.”

“I just think that creative is going to thrive and survive no matter what duress or what rears its head.”

“You know what, it’s just going to amplify and enhance.”

“We have to let go of being nostalgic about what creativity is. Success is not nostalgic, neither is creativity.”

“The CEOs and the CTOs and growth officers… are looking for these creative people because clearly these people ask different questions. If you ask different questions, you get different answers.”

“We’ve all probably been lectured by some client or someone’s told us the triangle of speed, quality and cost: pick two. I grew up with that whole thing. You know what, you need all three now. Technology can allow us to do all three: you can do things at pace, you can do things that are high quality, and you can do that at an affordable cost.”

[In the context of selling his agency Droga5 to Accenture to create Accenture Song] “I don’t want people to have to choose between the march of technology and the purity of creativity and only one of them could survive. They both need each other. Creativity needs technology to be real. Technology needs creativity to be more relatable and human.”

“AI could write the next version of Fast and Furious – you could plug that in right now and it’ll give you 10, 11, 12, 13 and 14. Gen AI is not going to write Barbie. It’s not, because that’s a different take on things; that takes a type of mindset that is leaps and irreverence and quirks and all these different things that make us who we are.”

“Look at the sorts of industries that disappeared within our industry as it merged: typesetters, storyboard artists, all these things that were crucial parts of the ecosystem just evaporated. Many people found new ways to position themselves; technology is irrepressible. So when we accept that it’s irrepressible, then you start to work out: ‘Okay, what’s my take on that?’ That’s why I say to the creative people: stop thinking about what’s going to make you redundant, start thinking about how you could shape it and influence it. Because that’s what it needs.”


Four pillars boards need to understand about generative AI
https://rossdawson.com/how-boards-need-to-be-thinking-about-generative-ai/
Sat, 07 Oct 2023

Generative AI is moving past being a buzzword to being woven into the fabric of business strategy and operations. It will undoubtedly lead to innovation, reconfiguration, and transformation across sectors. As it moves to the heart of business models and work structures, boards and executives must not only understand it but adeptly navigate its complexities.

Responding to the results of a survey of board members, leading business thinker Tom Davenport asks: Are Boards Kidding Themselves About Generative AI? In particular, he points to their claimed degree of expertise in generative AI.

Source: Alteryx, What Boardroom Leaders Think About Generative AI 

Given the scope of what boards need to comprehend around generative AI, there is clearly a gap between perceived knowledge and actual understanding. This is absolutely not just about understanding the technology, it is about having frameworks for considering the long-reaching and still-unfolding implications of generative AI.

Four Pillars for Boards to Understand

  1. Technical Foundations: While board members need not be AI engineers, a grasp of the foundational principles – the mechanics and limitations – is essential. They should understand the basics, such as the difference between generative models and analytic models, and the data and resources that power these AIs.
  2. Evolving AI Ecosystem: The AI landscape is dynamic. New startups, innovations, and shifts in industry standards mean that what’s relevant today might be outdated tomorrow. Boards should be cognizant of the changing players, platforms, and products.
  3. Practical Applications and Risks: AI isn’t a magic bullet. Identifying where it adds genuine value versus where it’s mere tech for tech’s sake is crucial. Alongside this, recognizing the pitfalls, from biased outputs to security concerns, is equally vital.
  4. Structural Implications: Beyond today’s use cases, boards should be visionary, anticipating how AI could transform industries, economies, and societies, uncover new business models, and likely redefine the nature of work.

In my engagements with various boards, it’s apparent that those who thrive are not those who deem themselves experts but those who are constantly curious. An adaptive mindset, rather than a fixed one, allows for agility in a world where AI’s path often zigzags rather than moving linearly.

The Proactive Role of Boards

Given the monumental influence of generative AI, it’s not enough for boards to be reactive. Instead, they must be proactive in shaping their organizations’ AI journey. This involves:

  • Continuous learning: Embracing workshops, seminars, and collaborative sessions with AI specialists to bridge knowledge gaps.
  • Establishing frameworks: Building considered frameworks for the pathways and relevance of generative AI, covering governance, strategic priorities, and work impact, with scenario planning as a valuable tool.
  • Ethical considerations: Establishing clear guidelines and protocols that ensure the ethical deployment of AI, addressing biases, transparency, and fairness.

As generative AI continues to progress, the onus lies on boards to be stewards of positive transformation. The intersection of AI and business contains vast potential and significant challenges. The future is deeply uncertain, but with informed, agile, and visionary leadership, boards can steer their organizations towards a promising AI-augmented future.

What the democratization of software development means for organizations
https://rossdawson.com/democratization-software-development-organizations/
Thu, 05 Oct 2023

Low-code and no-code software development have been around for a while. Now the rise of AI-assisted software development is pushing the power of software creation to the next level. This provides big opportunities, but also risks that need to be managed.

Empowering innovation

The massive opportunity is to drive innovation and faster iteration by empowering domain experts who know what an application should do, even if they don’t have the coding skills to execute it themselves. The communication gap between what a user needs and what a developer builds is a massive inefficiency.

Risks of citizen development

As developers often point out in response, users don’t actually know what they want; the value developers provide is helping to frame what is required and the path to get there.

Moreover, there are hidden costs to enabling citizen development. The most obvious is that individuals don’t see the big picture: they may duplicate what has already been done, may not create quality apps, and their interface design will likely be inconsistent with other applications, making things harder for other users.

The fragmentation challenge

A particularly important point is that pushing development out to end-users almost inevitably creates fragmented systems, with a proliferation of apps that are hard to integrate with existing platforms.

This makes it harder to create unified digital experiences, and creates risks to data integrity unless clear measures are in place for data access and storage.

Technology governance for transformation

As we have learned over many years, the more technology development is put in the hands of end-users, the more governance structures and oversight are needed. This is very obviously required for security as well as for maintaining the integrity of enterprise systems.

The challenge is to establish this in a way that enables the power of citizen development while keeping effective enterprise system structures. As I often term it, “governance for transformation”.

It is hard to get right, but the rewards of doing this well are massive: an incredibly agile, innovative organization that is stable as it rapidly evolves.


David Autor on the design of how we use AI and work polarization
https://rossdawson.com/david-autor-on-the-design-of-how-we-use-ai-and-work-polarization/
Fri, 11 Aug 2023

MIT professor David Autor is one of the leading labor economists in the world and an expert on the impact of technology on work. I have frequently referenced his work, notably on the polarization of work.

An interview in the Financial Times shares his perspectives on the role of AI in work. As Autor emphasizes, and as I have been saying for many years, the issue is the design of work and the economy, and the mental models we apply to how we do that.

The whole interview is worth reading; here are a few excerpts.

“I think AI is going to reduce the bottleneck of expertise in some areas, but that can complement others. There are many paths where you have foundational judgment, acquired through experience or training, bounded by some upper bound of technical or specific knowledge… The good case for AI is where it enables people with foundational expertise or judgment to do more expert work with less expertise. 

We do see reduced hiring at firms adopting AI in some of the tasks that AI is good for — information processing, some software coding, decision-making tasks. But I don’t think that is in any sense a full description of what’s going to occur. It’s actually a challenge of job design to figure out how we reallocate and redesign work, given the tools we now have available. This often takes a long time to figure out.

The question we should be concerned about is not the number of jobs. We have a labour shortage throughout the industrialised world. I am concerned about the number of jobs in Mumbai, but in the UK, in the US and northern, western Europe, we are running out of workers. The concern we should have is about expertise. If people are doing expert work that pays well and now they have to do generic work that pays poorly, that’s a concern. It’s the quality of jobs, not the quantity. The problem is technology can make some expertise much, much more valuable, but in other cases, it directly replaces expertise we already have.

Technology can be both very helpful or very harmful. It’s helpful to the degree it complements expertise and makes people’s skill set more valuable by allowing them to do more with it. It’s harmful to the degree that it takes skills that we’ve invested in that are the basis of our livelihood and makes them so abundant that they aren’t worth anything any more. I think we have a real design choice about how we deploy AI. It is so flexible, broadly applicable and malleable that we can do lots of stuff with it, some quite good, some quite bad, and depending on the mental model we have in mind of what we’re trying to do, we will accomplish different things.

If we could reinstate the value of mass expertise by enabling people to do more in the trades, in healthcare, in contracting construction, in even some of the writing tasks that we do, that would be spectacular. If it comes at the expense of making some elite expertise less scarce I think that’s ok. I don’t think people with PhDs and MDs and JDs are going to be wiped out. They may just not see the same year over year wage growth that they’ve seen over the last several decades. That’s ok. They’ve had a good run and they’ll be fine.”

The massive bust in virtual event platforms and what comes next
https://rossdawson.com/massive-bust-virtual-event-platforms-hopin-bluejeans/
Wed, 09 Aug 2023

The last few days have marked the massive bust in virtual events: Verizon closed down BlueJeans, Run The World was bought by EventMobi, and Hopin’s event management platform was acquired by RingCentral.

Hopin was valued at $7.6 billion and now appears to be valued at around $400 million.

Verizon paid $500 million which is now effectively written off.

Run The World raised $15 million, including from Andreessen Horowitz. While a sale price wasn’t disclosed, TechCrunch noted that there were 500 events recently listed, down from 15,000.

What went wrong?

Not dissimilarly to the dramatic back-to-office shift after Covid-era remote work, now that we are able to hold events in person, demand for virtual events has plummeted.

Akin to “travel revenge”, everyone is keen to get back to in-person events. My professional speaking was for two years entirely virtual. This year I have been back to travelling around the world, with only one remote presentation all year.

But another important point is that none of these platforms ever created a compelling virtual event experience. They were all largely based on physical event metaphors, and while there were a handful of interesting innovations, none brought them far beyond a glorified video call.

Will we get to a point where we can have engaging virtual events in the Metaverse, using avatars and next-generation glasses?

Probably, but the timeline on this, including people becoming comfortable with the new environments, is likely 5-10 years.

In the meantime virtual events will still be significant and a gradually growing proportion of overall events. But virtual event platforms are unlikely to be a massively lucrative sector. 

Is San Francisco coming back? 9 factors shaping its future
https://rossdawson.com/is-san-francisco-coming-back-factors-shaping-future/
Thu, 27 Jul 2023

For a couple of decades San Francisco has been my ‘second city’ after Sydney, where I’ve gotten to know the city, built a network of fascinating people, and run a number of conferences.

After a break during Covid, I’ve been back briefly four times in the last eight months, usually en route to US speaking gigs, catching up with people, going to events, and getting a bit of a feel for the city post-pandemic.

In my conversations here I’ve experienced a deep division of opinions on the state and future of San Francisco, with some seeing it as well past its peak following an exodus over recent years (“it’s a shithole”), while others believe it is in the early stages of a renaissance (“everyone’s coming back”).

As a visitor I don’t personally experience the issues facing the city to any significant degree, but it is interesting to reflect on some of the factors and how those may play out.

Knowledge networks, especially in AI. The Bay Area has long been the global leader in tech startups, substantially due to the intensity of world-leading expertise in the region. Those on the edge learn from each other, and want to be here to keep ahead. With AI transforming the world and AI capabilities centered on San Francisco, people are coming back to town.

Reversion of remote work. Many people left the city when remote work was possible and encouraged. Now many tech and other companies are enforcing back-to-office policies, resulting in people moving back to the city. However, this might be a short-term trend. The degree to which tech companies hire remote or local workers will fundamentally shape the city.

San Francisco vs Silicon Valley. During the 2010s I observed a substantial shift in the center of gravity of tech from mid and lower Silicon Valley to San Francisco. All the major tech companies located in the Valley set up SF offices, some relocated, and many new companies set up in the city. Not least among the reasons was that young dynamic people wanted to live in a vibrant city, not in the ‘burbs. The center of gravity could shift back, but the city is still vibrant, with most major AI companies in San Francisco. Venture capital is equally available across the Bay Area, though some investors prefer or mandate that companies are in the region.

Inequality, crime, drugs, and homelessness. The city’s social problems are legion, impacting the poor and making some parts of the city dangerous and unpleasant. Extreme wealth disparities inevitably create deep challenges, and there is little prospect of these easing. San Francisco apparently has the lowest proportion of young children of any city in the US, with parents choosing to live elsewhere.

Real estate prices and cost of living. Many have already left the city due to the high cost of housing, though this applies across most of the Bay Area. As seen in other cities, the cost of living could drive out more of the creative community that helped to make San Francisco the colorful, dynamic city that it has been for many decades.

Politics. In short, there is much current or future city governments could do to improve the city, but it is highly uncertain whether they will be effective.

Transport. Improved transport could enable people to live outside the city but work in San Francisco, balancing quality of life and work demands. The prospects for dramatically better public transport in the foreseeable future are low. However, autonomous cars or buses could allow low-cost transport for many.

Regulation and tax. A number of companies have relocated out of San Francisco and California due to higher taxes and a greater regulatory burden. The advantages of being located in the city, such as access to talent and tech ecosystems, need to outweigh these costs.

Community and values. There are extremely strong communities in San Francisco, many building strong bonds through doing ventures together. Many people have aligned values around positive social impact and expanding consciousness, with these communities sometimes harder to find in other US cities. 

Personally I have found it exceptionally intellectually stimulating being here in recent visits and I intend to get back more. It still feels like a second home to me, so I hope that it prospers.

Argument analysis of Andreessen’s ‘AI Will Save the World’ article
https://rossdawson.com/argument-analysis-andreessen-ai-will-save-the-world/
Wed, 26 Jul 2023

I’m in San Francisco at the Internet Archive (one of the most wonderful artefacts in the history of the web – read about it), attending an AI Knowledge Mapping Hackathon run by Society Library founder Jamie Joyce. The event supports the building of a knowledge graph of public debates. This event focused on Marc Andreessen’s recent famous AI Will Save the World article, adding human-generated arguments against every sentence and AI-excerpted proposition.

It’s a great exercise. When I first read the article and listened to the accompanying podcast, I found myself agreeing with just about everything, but it still left me deeply unconvinced on some aspects of the piece. So it’s been good to go back to it in depth. 

We were given a spreadsheet containing all 320 sentences in the article, as well as a list of 128 summarized general claims, and were invited to provide supporting, refuting, or refining arguments. Here is the argument spreadsheet if you would like to add any arguments yourself!

Here is part of what I contributed to the debate mapping, commenting on Andreessen’s statements.

“AI will not destroy the world, and in fact may save it.”

The first part of this sentence is a statement of belief. In fact, very little in the article directly supports this statement.

The second part of the statement is not directly supported in the article either; however, the article extensively describes many of the ways that AI could have substantial positive social benefit. Which is not the same thing as saving the world.

The fact that there can be substantial positive impact in a variety of domains does not, in fact, repudiate the potential for AI to destroy the world.

Much of the article argues that there is a moral panic about AI, with many of the major protagonists benefiting from this panic. Andreessen himself writes that:

“it’s not that the mere existence of a moral panic means there is nothing to be concerned about.”

He then goes on to focus on the fact that we have an AI moral panic, which is probably a fair assessment but does not validate his broader case.

One of the most pointed statements Andreessen makes is:

“The claim that the owners of AI will steal societal wealth from workers is a fallacy.”

There are a few problems with this. It is an assertion with little subsequent substantiation.

“Steal” is an emotive word and an active verb. Even if there were no intent to take wealth from workers (a generous assumption), that could happen naturally due to an array of factors. In fact, the U.S. labor share of GDP has in recent years been substantially below its level at any time prior to 2008.

Given the extraordinary economic and social value that Andreessen professes AI will give us, it would be surprising if the companies that own the most-used AI did not accrue a very high proportion of the economic value.

I personally agree with Andreessen that we are likely to have strong employment into the indefinite future. However the disruption to livelihoods and lives as existing roles are eliminated or reshaped by AI could still be brutal, and there is no evidence that the well-documented polarization of work will not continue. 

Andreessen also argues that AI will not kill us all. 

“AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave. In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.”

The specific words here are important. Andreessen’s argument for why AI will not kill us is founded on two concepts:

“AI doesn’t want.” However, just because AI doesn’t have explicit volition doesn’t mean it won’t engage in non-human-aligned action.

“AI is not going to come alive.” This is in fact highly debatable, and has no direct bearing on whether AI may kill us all. 

As such these are by no means solid arguments.

The points I’ve addressed above are where Andreessen’s article is weakest. I do agree with almost all the points that he makes. But the things he has justified most thoroughly in the piece are not in fact his central points, which he essentially presents as assertions.

I and others consider these assertions to be (highly) questionable. 

The Six Facets of the Singularity
https://rossdawson.com/the-six-facets-of-the-singularity/
Tue, 18 Jul 2023

I first came across the concept of the Singularity a few decades ago. I was intrigued but sceptical on a number of fronts. There seemed to be some massive and highly questionable assumptions behind it all.

Yet a belief in the concept of accelerating returns in all its guises has been central to my life, and you certainly can’t discard the idea of the Singularity. The inevitability of it happening as described is more debatable.

However, since November 30, 2022, when ChatGPT was launched, many of the ideas of the Singularity are not only far more current, they have become central to discussions across many dimensions of society.

With the advent of generative AI with all its manifold implications, it is worth coming squarely back to the idea of the Singularity.

The problem has been that whenever two people talk about the Singularity they think about it in different ways. It is not one thing.

To help disambiguate, I’ve created the Six Facets of the Singularity framework, shown below.

Some of the facets complement or fit with others, some contradict each other. The Singularity is not an integral concept. It is open to our interpretation, and indeed if we transition into and through it we will each understand the process in a different way.

These facets can be debated and discussed, but the intention of the framework is to give us a common frame to discuss the Singularity.

How does each of us relate to these different facets? Collectively that could determine our future. 

Accelerating Returns

Each technological advancement fuels further innovation, creating a cycle of exponential growth that speeds up the rate of progress.

AGI > Humans

Artificial General Intelligence could surpass human intellect in every domain, transforming society and the economy.

Transhumanism

Integrating advanced technologies with human biology could redefine humanity, radically augmenting abilities, extending lifespans, and migrating minds.

Superabundance

Technologies including AI, nanotechnology, and additive manufacturing could render traditional economic constraints obsolete, enabling a post-scarcity society.

Existential risk

Acceleration entails significant risks, potentially leading to human extinction or other catastrophic outcomes if we cannot control unprecedented technologies.

Event horizon

We may reach a point beyond which uncertainty and the pace of change exceeds our capacity to foresee or understand future developments.
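The first of these facets, accelerating returns, lends itself to a simple numerical illustration. The sketch below is my own toy model with arbitrary parameters (not from the framework itself, or from any particular forecaster): it contrasts linear and plain exponential growth with a regime in which each advance also improves the rate of advance, which is what makes returns "accelerating" rather than merely compounding.

```python
# Toy comparison of three growth regimes (illustrative only; all
# parameters are arbitrary and carry no empirical meaning).

def linear(steps, rate=1.0):
    # Capability grows by a fixed increment each step.
    level = 1.0
    for _ in range(steps):
        level += rate
    return level

def exponential(steps, rate=0.1):
    # Capability grows by a fixed percentage each step.
    level = 1.0
    for _ in range(steps):
        level *= 1 + rate
    return level

def accelerating(steps, rate=0.1, acceleration=0.05):
    # Each advance also improves the rate of advance itself,
    # producing super-exponential ("accelerating returns") growth.
    level = 1.0
    for _ in range(steps):
        level *= 1 + rate
        rate *= 1 + acceleration
    return level

for steps in (10, 50, 100):
    print(steps,
          round(linear(steps), 1),
          round(exponential(steps), 1),
          round(accelerating(steps), 1))
```

Setting acceleration to zero collapses the third curve back to plain exponential growth; the gap between those two curves is the substance of the accelerating-returns claim.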

Image: Midjourney
