Dead Internet Theory: Is the Web Dying?

In 2024, over half of all internet traffic is generated by bots. For human users, it is increasingly challenging to tell what's real and what's not. These developments breathe new life into the Dead Internet Theory. Have we, in our quest for clicks, killed the web?

dead-internet-theory.jpg

If you've been on social media lately, you might have noticed a surge in clearly AI-generated content going viral: examples include sand and bottle sculptures, tiny houses, celebrities, and, of course, Jesus Christ. The community at r/ChatGPT on Reddit seems particularly captivated by this trend, with memes mocking the 'boomers on Facebook' who fall for these fake images dominating the subreddit for weeks. Ironically, even the tech-savvy users on Reddit sometimes fail to distinguish between real and manipulated content. For example, one user reposted a photo of an olive tree that looks like it has a face, assuming it was AI-generated or at least photoshopped, and mocked the gullible people who took it for real. However, it turns out that both the photo and the tree are genuine. You can find the tree in Puglia, Italy, and pictures of it from various angles had already gone viral long before AI image generation was sophisticated enough to deceive us gullible h00mans.

However, the ease with which posts like 'My son built a spaceship out of plastic bottles' gain massive attention and engagement is alarming. The comments on such posts are also worth looking into, as most of them could have been lifted straight from a spambot's database.

So, are we now stuck in a cycle where bots create content, upload it to fake profiles, and then other bots engage with it until it inevitably pops up in everyone's feeds?
It's starting to look like it.

Dead Internet Theory Reborn

These observations support the Dead Internet Theory, a conspiracy theory suggesting that the internet has become so dominated by bots, manipulated algorithms, and automatically generated content that it is mostly devoid of human presence and, therefore, dead.

To be fair, the theory is not brand-new. It first appeared on message boards in the mid-to-late 2010s and has only become more widespread in recent years. While humans obviously still participate on the internet (this was written by someone with opposable thumbs, I swear), the Dead Internet Theory has gained traction as AI and LLMs grow more capable and produce more content every day.

The rise of the robots is undeniable: in 2023, 49.6% of internet traffic came from bots. Considering the trendline, we can safely assume that 2024 will be the first year in which robots account for more than half of all internet traffic.

Of course, content monetization and spam have long been problems affecting the quality of search results. But LLMs have made it easier than ever to generate low-quality content. This coincides with heavy centralization: today's internet is organized around big platforms that use opaque algorithms to determine what we see and what we don't. Thus, while the internet is not truly dead, it has undoubtedly lost many of its good qualities. Twenty years ago, it was a place where nerds went to get and give advice. Today, everything is monetized, centralized, and automated... If these trends continue, the web might indeed become so degraded that it is dead in all but name.

SEO in a Dead Internet

The rise of generated content has a bazillion implications. To keep things concise and stick to my area of expertise, let us briefly discuss the main challenges for Search Engine Optimization (SEO) today.

The world of SEO has always evolved faster than most other industries. However, we have witnessed some developments that might be irreversible. Due to the centralization on major networks like Facebook, X, and TikTok, we've seen a significant drop in referral traffic. Nowadays, few users actively check pages outside the big platforms. Even the largest news sites struggle to attract views if their articles don't perform well on social media, which only provides fleeting traffic bursts. Additionally, social media platforms use algorithms designed to keep users engaged on-site, making it much harder for content with outbound links to go viral. As a result, search engines remain the only reliable source of unpaid, long-term traffic acquisition. Unsurprisingly, this market is also heavily centralized, with countless businesses' fortunes tied closely to the moods of Google's algorithm.

The rise of AI-generated content, increased bot traffic, and exploitable algorithms present SEO experts with opportunities as well as uncertainties. This year, Google has already rolled out two core updates to its search engine aimed at reducing the visibility of AI-generated content. From my own testing with search queries and from analyzing somewhat erratic Analytics and Search Console data, it appears so far that the visibility of low-quality AI articles may have increased instead. I might be wrong here, but I still encounter many top search results that are clearly poorly crafted AI pieces and should not be featured on the first page of Google under any circumstances. If your experiences differ, please feel free to share them.

Reanimation Efforts

So, what can we do to prevent the web from dying? In my view, any reanimation efforts must be grounded in the belief that the internet is only as good as the content we create. AI and bots only function based on human instructions. To counter the surge of automatically generated and bot-distributed content, we must dig deep into our creativity and leverage all our expertise. Whether we use AI assistance or not, as content creators, we must always prioritize quality over quantity and (cross your fingers) hope that the almighty algorithm works in our favor.

Above all, to rescue the internet from being completely dominated by bots, we need to stay authentic, engage genuinely, seek information beyond the major platforms, and continuously strive to differentiate ourselves from the increasingly sophisticated bot army. You can take the first step now! Prove you're not a robot and leave a comment below ;)

‘We’ll Know We Have AGI When >50% of the GDP is Generated by AI’

Physicist and former NASA engineer Anthony Scodary shares his vision of an AGI-powered future that enables a better quality of life for all people.

agi-talks-as.jpg

About Anthony Scodary

anthony-scodary.jpg

Anthony Scodary is the co-founder of Gridspace, a speech and language AI company pioneering advanced voice bots for call centers. With a background at NASA's Jet Propulsion Laboratory, he contributed to significant missions such as the Curiosity Mars rover and Juno's journey to Jupiter. Anthony holds a bachelor's degree in Physics and a master's in Aeronautics & Astronautics from Stanford University. An innovator at heart, he holds several patents for AI-driven speech technologies.

AGI Talks: Interview with Anthony Scodary

In the latest AGI Talks, we asked Anthony Scodary 10 questions about Artificial Intelligence (AI), Artificial General Intelligence (AGI), Artificial Super Intelligence (ASI), and the impact these technologies could have on society.

1. What is your preferred definition of AGI?

Anthony Scodary: Human intelligence isn't understood well enough to uncontroversially define tasks or tests that span the range of the human intellect. If you include emotional intelligence, mobility, common sense, and all sensory modalities, it's even harder to define AGI via a battery of tests. Instead, I prefer an argument from François Chollet, whom I greatly admire on this topic, that we'll know we have AGI when more than 50% of the gross domestic product is generated by virtual agents.

2. … and ASI (Artificial Superintelligence)?

When AI appears to be improving exponentially, the difference between = and > is trivial. By the time we know we have AGI, we'll have ASI. It's like asking when a bullet train will arrive at or pass through Tokyo.

3. In what ways do you believe AI will most significantly impact society in the next decade?

Economists look at automation through the lens of productivity growth per sector. We can get a hint of what's to come by looking at one of the most heavily automated sectors: agriculture. According to the Kansas City Fed, in 1900, 37.9% of the U.S. labor force worked on 5.7 million farms to support 76 million consumers, a ratio of 13 consumers per farm. By 2017, with agriculture contributing just 0.9% to GDP and 1.1% of the workforce on 2 million farms, the ratio had dramatically increased to 159 consumers per farm.
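The cited ratios follow directly from dividing consumers by farms; a minimal sanity-check sketch (the 2017 US population figure of roughly 325 million is my own assumption, since the article cites only percentages and farm counts for that year):

```python
# Sanity-check the Kansas City Fed ratios quoted above.
# NOTE: the ~325 million consumers for 2017 is an assumed figure,
# not one stated in the article.
def consumers_per_farm(consumers_millions: float, farms_millions: float) -> float:
    """Consumers supported by each farm; both inputs in millions."""
    return consumers_millions / farms_millions

ratio_1900 = consumers_per_farm(76, 5.7)    # 76M consumers, 5.7M farms
ratio_2017 = consumers_per_farm(325, 2.05)  # assumed ~325M consumers, ~2M farms

print(round(ratio_1900))  # 13
print(round(ratio_2017))  # 159
```

A roughly twelvefold jump in consumers supported per farm is the scale of automation the answer asks us to imagine for knowledge work.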

This automation has made Americans richer, better fed, and more urbanized. We can reframe the question: what would happen over the next decade if the same degree of automation were experienced in US knowledge work? We can expect another realignment of labor sectors, notably creating new knowledge work that's less automatable. My company, Gridspace, works in the contact center space, which currently employs approximately 3 million people in the US. Many customer service jobs will likely evolve to be more personal and better trained, as the more menial aspects of the job (recorded statements, reminders, forms) are automated. This should ultimately result in better service for consumers and new opportunities for workers that were previously uneconomical.

4. What do you think is the biggest benefit associated with AI?

Better quality of life for all people.

5. … and the biggest risk of AI?

Overreliance relative to the maturity of the technology. Humans have never encountered a technology so capable of over-representing its own strengths and concealing its own weaknesses. A large part of our work at Gridspace is building machines that aren't simply smart but reliable and controllable.

6. In your opinion, will AI have a net positive impact on society?

Yes.

7. Where are the limits of human control over AI systems?

The more we relinquish control to automation of any kind, the more sophisticated our monitoring and measurement becomes. This is true in manufacturing and agriculture today, and will increasingly be true with knowledge work.

8. Do you think AI can ever truly understand human values or possess consciousness?

I don't think any human will ever truly understand our own values. It's a problem we've been wrestling with for thousands of years.

9. Do you think your job will ever be replaced by AI?

I'm the most replaceable person on the planet. My labrador retriever is gunning for the job.

10. We will reach AGI by the year?

Most major technological transformations (gas-powered cars, electrical lighting) have followed a 50-year S-shaped adoption curve. Based on my economic definition of AGI, I'll estimate 2048.
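Scodary's economic definition pairs naturally with his 50-year S-curve: if AI's share of GDP follows a logistic adoption curve, the midpoint is exactly where the 50%-of-GDP bar is crossed. A hypothetical sketch (the midpoint year and the 50-year 10%-to-90% span are my own assumptions chosen to match his 2048 estimate, not parameters he gave):

```python
import math

# Hypothetical logistic (S-shaped) adoption curve for AI's share of GDP.
# ASSUMPTIONS: midpoint year 2048 and a 50-year 10%-to-90% span,
# chosen only to illustrate the 2048 estimate.
def ai_gdp_share(year: float, midpoint: float = 2048,
                 span_10_to_90: float = 50) -> float:
    k = 2 * math.log(9) / span_10_to_90  # steepness implied by the span
    return 1 / (1 + math.exp(-k * (year - midpoint)))

print(ai_gdp_share(2048))  # 0.5: the 50%-of-GDP AGI threshold is crossed here
```

Under these assumptions the curve sits at 10% of GDP in 2023 and 90% in 2073, with AGI (by the economic definition) arriving at the 2048 midpoint.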

‘AI Is Expected to Transform the Role of Controllers & Analysts’

AI will automate many routine tasks in accounting, and the role of financial controllers and analysts will change, but not be replaced, say Manoj Kumar Vandanapu and Sandeep Kumar.

agi-talks-02.jpg

In the latest AGI Talks, two renowned finance experts share their insights by answering 10 questions about Artificial Intelligence (AI) and Artificial General Intelligence (AGI).

About Manoj Kumar Vandanapu & Sandeep Kumar

Manoj Kumar Vandanapu and Sandeep Kumar are seasoned experts in the fields of finance and controlling.

manoj.jpg

Manoj, serving as a Corporate Finance Controller for a multinational investment bank and an independent researcher in Illinois, is recognized for his integration of finance and technology. With a background in accounting combined with a passion for AI and Machine Learning, Manoj's career focuses on driving financial practices forward. His leadership in deploying innovative solutions within the investment banking sector has markedly enhanced operational efficiencies and established new industry benchmarks. As a researcher, peer reviewer, and adjudicator, he continues to play a critical role in the evolution of financial technologies, mentoring emerging professionals along the way.

sandeep.jpg

Sandeep is an expert in SAP AI and Data Analytics with more than 20 years of experience. He has served in leadership roles implementing and operating multi-million-dollar, multi-year SAP ERP projects, and has applied broad cross-functional business and technology know-how in the fields of systems architecture, data engineering, AI, and analytics.

AGI Talks with Manoj and Sandeep

In our interview, Manoj and Sandeep share insights on AI's impact on finance and accounting:

1. What is your preferred definition of AGI?

Manoj & Sandeep: From a finance and accounting perspective, AGI can be defined as an AI system that possesses the ability to understand, learn, and apply knowledge across a wide range of financial and accounting tasks at a level of competence comparable to or surpassing that of a human expert. This includes abilities such as conducting financial analysis, making investment decisions, managing risk, and interpreting complex tax and accounting laws autonomously.

2. … and ASI?

ASI refers to a hypothetical AI system that not only matches but significantly surpasses human intelligence across all fields, including finance and accounting. In the finance and accounting domains, super-intelligent AI could potentially revolutionize insight generation in the financial markets, decision making based on financial data, audit processes, and strategic financial planning and forecasting by processing and analyzing data at a scale and speed unattainable by human beings.

3. In what ways do you believe AI will most significantly impact society in the next decade?

In the next decade, AI is poised to significantly impact society by automating routine tasks, enhancing decision-making processes, and personalizing services. In finance and accounting, this could translate into more efficient operations, improved accuracy in financial reporting, and personalized financial advice. However, it may also lead to job displacement for roles built around mundane, repetitive tasks, such as financial reconciliations, data analysis and consolidation, and operational reporting, and it will require a shift in skills to enhance and support AI utilization in the finance domain.

4. What do you think is the biggest benefit associated with AI?

The biggest benefit of AI, particularly in finance and accounting, is its potential to enhance efficiency and accuracy. By automating repetitive and time-consuming tasks, AI can free up human professionals to focus more on strategic and analytical tasks, potentially leading to more insightful financial decisions and innovations.

5. … and the biggest risk of AI?

The biggest risk associated with AI is the potential for exacerbating inequalities and causing job displacement. As artificial intelligence systems become more capable, there is a risk that they could replace a significant number of jobs in finance and accounting, leading to economic and social challenges. However, at the same time, it will also open doors to new opportunities and roles to optimally enhance the design and utilization of AI capabilities. Additionally, the concentration of AI capabilities in the hands of a few could increase wealth and power disparities.

6. In your opinion, will AI have a net positive impact on society?

Whether AI will have a net positive impact on society depends on how its development and deployment is managed. If governed ethically and inclusively, AI has the potential to contribute positively by driving economic growth, improving financial services, and enhancing productivity. However, addressing the challenges of equity, privacy, and employment in the initial stage will be crucial.

7. Where are the limits of human control over AI systems?

The limits of human control over AI systems are defined by the complexity of these systems and the unpredictability of their learning processes. As AI systems, particularly those based on GenAI, evolve based on their interactions and data inputs, ensuring they adhere to human values and ethics becomes increasingly challenging, especially for complex and autonomous systems in fields such as finance, healthcare, and law.

8. Do you think AI can ever truly understand human values or possess consciousness?

While AI can be programmed to mimic certain aspects of human ethics and decision-making, genuinely comprehending the depth of human values or achieving consciousness involves subjective experiences and emotions that are currently beyond AI's capabilities. However, we are hopeful that this will evolve with time.

9. Do you think your jobs as controllers and analysts will ever be replaced by AI?

While AI is set to automate certain aspects of the financial controller's or advanced analytics role, especially the more routine tasks, it is less likely to replace the role entirely. Instead, AI is expected to transform the role, elevating the importance of strategic oversight, decision-making, and technological proficiency. Financial controllers and analytics experts will adapt to and support these changes by acquiring new skills. Those who learn to leverage AI effectively can enhance their value and remain indispensable to their organizations.

10. We will reach AGI by the year?

Predicting the timeline for achieving AGI is highly speculative, with estimates ranging from a decade (i.e. 2035) to a few more decades. Factors such as breakthroughs in computational power, algorithmic efficiency, and data availability play crucial roles. From a finance and accounting perspective, reaching AGI would mean developing systems that can fully understand and innovate within these domains autonomously, a milestone that is very much possible, but still uncertain and dependent on numerous technological and ethical considerations.

‘30% of Activities Performed by Humans Could Be Automated with AI’

Alexander De Ridder, AI visionary and CTO of SmythOS, discusses the transformative power of specialized AI systems and the future of human-AI collaboration.

header-agi-talks-adr.jpg

In the newest interview of our AGI Talks series, Alexander De Ridder shares his insights on the potential impacts of Artificial General Intelligence (AGI) on business, entrepreneurship, and society.

About Alexander De Ridder

profile-alexander-de-ridder.jpg

With a robust background that spans over 15 years in computer science, entrepreneurship, and marketing, Alexander De Ridder possesses a rare blend of skills that enable him to drive technological innovation with strategic business insight. His journey includes founding and successfully exiting several startups.

Currently, he serves as the Co-Founder and Chief Technology Officer of SmythOS, a platform that seeks to streamline processes and boost efficiency across various industries. SmythOS is the first operating system specifically designed to manage and enhance the interplay between specialized AI agents.

Based in Houston, Alexander is a proactive advocate for leveraging AI to extend human capabilities and address societal challenges. Through SmythOS and his broader endeavors, he aims to equip governments and enterprises with the tools needed to realize their potential, advocating for AI-driven solutions that promote societal well-being and economic prosperity.

AGI Talks: Interview with Alexander De Ridder

In our interview, Alexander provides insights on the impact of AI on the world of business and entrepreneurship:

1. What is your preferred definition of AGI?

Alexander De Ridder: The way you need to look at AGI is simple. Imagine tomorrow there were 30 billion people on the planet. But only 8 billion people needed an income. So, what would happen? You would have a lot more competition, prices would be a lot more affordable, and you have a lot more, you know, services, wealth, everything going around.

AGI in most contexts is a term used to define any form of artificial intelligence that can understand, learn, and utilize its intelligence to solve any problem almost like a human can. This is unlike narrow AI which is limited to the scope it exists for and cannot do something outside the limited tasks.

2. … and ASI (Artificial Superintelligence)?

ASI is an artificial intelligence that is on par with human intelligence in a variety of cognitive abilities, including creativity, comprehensive wisdom, and problem-solving.

ASI would be able to surpass the intelligence of even the best human minds in almost any area, from scientific creativity to general wisdom, to social or individual understanding.

3. In what ways do you believe AI will most significantly impact society in the next decade?

AI will enable businesses to achieve higher efficiency with fewer employees. This shift will be driven by the continuous advancement of technology, which will allow you to automate various tasks, streamline operations, and offer more personalized experiences to customers.

Businesses will build their own customized digital workers. These AI agents will integrate directly with a company's tools and systems. They will automate tedious tasks, collaborate via chat, provide support, generate reports, and much more.

The potential to offload repetitive work and empower employees is immense. Recent research suggests that around 30% of activities currently performed by humans could be automated with AI agents. This will allow people to focus their energy on more meaningful and creative responsibilities.

Agents will perform work 24/7 without getting tired or getting overwhelmed. So, companies will get more done with smaller teams, reducing hiring demands. Individuals will take on only the most impactful high-value work suited to human ingenuity.

4. What do you think is the biggest benefit associated with AI?

AI enhances productivity by automating complex workflows and introducing digital coworkers or specialized AI agents, leading to potential 10x productivity gains.

For example, AI automation will be accessible to organizations of any size or industry. There will be flexible no-code interfaces that allow anyone to build agents tailored to their needs. Whether it's finance, healthcare, education, or beyond, AI will help enterprises globally unlock new levels of productivity.

The future of work, blending collaborative digital and human team members, is nearer than many realize. And multi-agent systems are the key to unlocking this potential and skyrocketing productivity.

5. … and the biggest risk of AI?

In some cases, the integration of AI in the workplace highlights, and even enables, mediocre workers. As AI takes over routine and repetitive tasks, human workers need to adapt and develop new skills to stay relevant.

6. In your opinion, will AI have a net positive impact on society?

I would be very glad to lead a campaign to improve the general good of the world by making sure many people become aware of, and exploit, the opportunities within Multi-Agent Systems Engineering (MASE). That would enable the implementation of AI agents for benevolent purposes.

In the future, non-programmers will easily assemble specialized AI agents with the help of basic elements of logic, somewhat similar to children assembling their LEGO blocks. I would advocate for platforms like SmythOS that abstract away AI complexities so domain experts can teach virtual assistants. With reusable components and public model access, people can construct exactly the intelligent help they need.

And collaborative agent teams would unlock exponentially more value, coordinating interdependent goals. A conservation agent could model sustainability plans, collaborating with a drone agent collecting wildlife data and a social media agent spreading public awareness.

With some basic training, anyone could become a MASE engineer, one of the architects of this AI-powered future. Rather than passive tech consumption, people would actively create solutions tailored to local needs.

By proliferating MASE design skills and sharing best agent components, I believe we can supercharge global problem solvers to realize grand visions. The collective potential to reshape society for the better rests in empowering more minds to build AI for good. This is the movement I would dedicate myself to sharing.

7. Where are the limits of human control over AI systems?

As AI proliferates, content supply will expand to incredible heights, and it will become impossible for people to be found by their audience unless you are a very big brand with incredible authority. In the post-AI agent world, everyone will have some sort of AI assistant or digital co-worker.

8. Do you think AI can ever truly understand human values or possess consciousness?

While AI continually progresses on rational tasks and data-based decision-making, for now it falls short on emotional intelligence, intuition, and the wisdom that comes from being human. We learned the invaluable lesson that the smartest systems aren't the fully automated ones; they're the thoughtfully integrated blend of artificial and human strengths, applied at the right times.

In areas like branding, campaign messaging, and customer interactions, we learned to rely more on talent from fields like marketing psychology paired with AI support, not pure unsupervised generative text. This balancing act between automated solutions and human-centric work is key for delivering business results while preserving that human touch that builds bonds, trust, and rapport.

This experience highlighted that today's AI still has significant limitations when it comes to emotional intelligence, cultural awareness, wisdom, and other intrinsically human qualities.

Logical reasoning and statistical patterns are one thing, but true connection involves nuanced insight into complex psychological dynamics. No amount of data or processing power can yet replicate life experiences and the layered understandings they impart.

For now, AI works best as a collaborative enhancement, not a wholesale replacement, in areas fundamental to the human experience. The most effective solutions augment people rather than supplant them, handling rote administrative tasks while empowering human creativity, judgment, and interpersonal skills.

Fields dealing directly in sensitive human matters like healthcare, education, and governance need a delicate balance of automation coupled with experienced professionals, especially when ethical considerations around bias are paramount.

Blending AI's speed and scalability with human wisdom and oversight is how we manifest the best possible futures. Neither is sufficient alone. This balance underpins our vision for SmythOS: keeping a person in the loop for meaningful guidance while AI agents tackle tedious minutiae.

The limitations reveal where humans must lead, govern, and collaborate. AI is an incredible asset when thoughtfully directed, but alone lacks the maturity for full responsibility in societys foundational pillars. We have much refinement ahead before artificial intelligence rivals emotional and contextual human intelligence. Discerning appropriate integration is key as technology steadily advances.

9. Do you think your job as an entrepreneur will ever be replaced by AI?

Regarding job displacement, we see AI as empowering staff, not replacing them. The goal is to collaborate effectively with artificial teammates to unlock new levels of innovation and fulfillment. We believe the future is blended teams, with humans directing priorities while AI handles repetitive tasks.

Rather than redundancy, it's an opportunity to elevate people towards more satisfying responsibilities that better leverage their abilities. Time freed from drudgery opens creative avenues previously unattainable when people were bogged down in administrative tasks. Just as past innovations like factories or computers inspired new human-centered progress, AI can propel society forward if harnessed judiciously.

With conscientious governance and empathy, automation can transform businesses without devaluing humanity. Blending inclusive policies and moral AI systems to elevate both artificial and human potential, we aim for SmythOS to responsibly unlock a brighter collaborative future.

10. We will reach AGI by the year?

I think that a one-year window is too short a frame for achieving AGI in general. I think that we (humans) will discover challenges and face disillusionment in some respects, forcing us to re-evaluate our expectations of AI. Maybe AGI is not actually the holy grail; instead, we should focus on AIs that multiply our capabilities rather than ones that could potentially replace us.

‘Prepare for the Earliest Possible AGI Deployment Scenario’

Despite the uncertain timeline for Artificial General Intelligence (AGI) becoming a reality, we need to ensure responsible and ethical development today, says Jen Rosiere Reynolds.

header-agi-talks-jrr.webp

As part of our new AGI Talks, experts from different backgrounds share unique insights by answering 10 questions about AI, AGI, and ASI. Kicking off the series, we are privileged to feature Jen Rosiere Reynolds, a digital communication researcher and Director of Strategy at a Princeton-affiliated institute dedicated to shaping policymaking and accelerating research in the digital age.

About Jen Rosiere Reynolds

jrr.webp

Jen Rosiere Reynolds focuses on digital communication technology, specifically the intersection between policy and digital experiences. Currently, she is supporting the development of the Accelerator, a new research institute for evidence-based policymaking in collaboration with Princeton University. Previously, she managed research operations and helped build the Center for Social Media and Politics at NYU. Jen holds a master's degree in government from Johns Hopkins University, where she focused her research on domestic extremism and hate speech on social media. She has a background in national security and intelligence.

The mission of the Accelerator is to power policy-relevant research by building shared infrastructure. Through a combination of data collection, analysis, tool development, and engagement, the Accelerator aims to support the international community working to understand today's information environment, i.e. the space where cognition, technology, and content converge.

AGI Talks with Jen Rosiere Reynolds

We asked Jen 10 questions about the potential risks, benefits, and future of AI:

1. What is your preferred definition of AGI?

Jen Rosiere Reynolds: AGI is a hypothetical future AI system with cognitive and emotional abilities like a human's. That would include understanding context-dependent human language and belief systems, as well as succeeding at both goals and adaptability.

2. … and ASI?

ASI is a speculative future AI system capable of creative and complex actions that outsmart humans. It would be able to learn any task that humans can, but much faster, and should be able to improve its own intelligence. With our current techniques, humans would not be able to reliably evaluate or supervise ASIs.

3. In what ways do you believe AI will most significantly impact society in the next decade?

I expect to see further algorithmic development, as well as improvements in storage and computing power, which can expedite AI.

Broadly, there are so many applications of AI in various fields, like health, finance, energy, etc., and these applications are all opportunities for either justice or misuse. Lots of folks are adopting and learning how to use human-in-the-loop technologies that augment human intelligence. But right now, we still don't understand how LLMs or other AI are influencing the information environment at a system level, and that's really concerning to me. It's not just about what happens when you input something into a generative AI system and whether it produces something egregious. It's also about what impact the use of AI may have on our society and world.

I've heard 2024 referred to as the year of elections. We see that in the United States as well as in so many global elections that have already taken place this year and will continue this summer and fall. We need to be really thoughtful about what effect influence operations have on elections and national security. It's challenging right now to understand the impact that deep fakes, or the manipulation or creation of documents and images, have on people's decision-making. We saw the CIA, FBI, and NSA confirm Russian interference in the 2016 US presidential election, and there was a US information operation on Facebook and Twitter that was taken down back in 2022, but what's the impact? The US-led online effort got thousands of followers, but that doesn't mean that thousands of people saw the information, or that their minds or actions changed. I hope very soon we can understand how people typically understand and interact with the information environment, so we can talk about measurements and impact more precisely. In the next decade, I expect we can much more specifically understand how AI and the use of AI affect our world.

4. What do you think is the biggest benefit associated with AI?

Right now, I think that the biggest benefit associated with AI lies in its potential to minimize harm in various scenarios. AI could assist in identifying and prosecuting child sexual exploitation without exposing investigators to the imagery, and it could analyze the data much more efficiently, resulting in faster, more accurate, and less harmful analysis. AI could help with early diagnosis and support the development of new life-saving medicines. AI could also help reduce decision-making bias in criminal justice sentencing and job recruitment. All of these can happen, but there are also decisions to be made, and that's where education and open discussion are important, so that we can prioritize values over harm.

5. And the biggest risk of AI?

Right now, I see two significant risks associated with the development of AI that are the most urgent and impactful. The first is the need to ensure that AI development is responsible and ethical. AI has the potential to be used for harmful purposes, perpetuating hatred, prejudice, and authoritarianism. The second risk is that policymakers struggle to keep up with the rapid pace of AI development. Any regulation could quickly become outdated and ineffective, potentially hindering innovation while also failing to protect individuals and society at large.

6. In your opinion, will AI have a net positive impact on society?

I think that AI has great potential to make a positive impact on society. I see AI as a tool that people develop and use. My concern lies not with the tool itself, but with how we, as humans, choose to develop and use it. There is a long-running debate in the national security space about what should be developed, because of the potential for harmful use and misuse; these discussions should absolutely inform conversations about the development of AI. I am encouraged by the general attention that AI and its potential uses are currently receiving and do believe that broad and inclusive open debate will lead to positive outcomes.

7. Where are the limits of human control over AI systems?

Focus on the limits of human control over AI systems may be a bit premature and potentially move focus away from more immediate issues. We don't fully understand the impact of AI that is currently deployed, and it's difficult to estimate the limits of human control over what might be developed in the future.

8. Do you think AI can ever truly understand human values or possess consciousness?

I can imagine AI being able to intellectually understand the outward manifestation of values (i.e., how a person acts when they are being patient). When raising the issue of whether technology can truly feel or possess consciousness, we get into debates that are reflected across society and the world, raising questions like: what is consciousness, and when does personhood begin? We can see these debates around end-of-life care, for example. While I personally don't believe that AI could truly manifest the essence of a human, I know that others would disagree based on their understanding and beliefs about consciousness and personhood.

9. Do you think your job as a researcher will ever be replaced by AI?

Maybe. I think that lots of jobs could potentially be replaced, or at least parts of jobs. We see that right now with human-in-the-loop tools: parts of someone's job may become much more efficient or quick. This can be very threatening to people. I think everyone should have the dignity of work and the opportunity to make a living. If there are cases where technology results in job displacement, society should take responsibility, acknowledge that we allowed this to happen, and support the people affected.

10. We will reach AGI by the year?

OpenAI announced that they expect the development of AGI within the next decade, though I haven't come across any other researchers who share such an aggressive timeline. I'd recommend preparing as best as possible for the earliest possible AGI deployment scenario, as there are several unknown elements in the equation right now: the future advancement of algorithms and future improvements in storage and compute power.

Slaying Unicorns: How Europe Sabotages Its Own Economic Future

With the decline of industry and post-colonial exploitation, Europe should aim to become a global leader in the tech and service industry. But as the EU increasingly complicates the process for startups to thrive, the economic outlook appears bleak.

If you've missed recent AI news, Claude Opus now outperforms GPT-4 in most areas, making it the preferred tool for performance-focused users, many of whom are canceling their OpenAI subscriptions.

However, that's only good news if you are not located in Europe. Due to European Union (EU) regulations, Anthropic's Claude 3 is inaccessible within its jurisdiction. This is only a minor example of how the EU inadvertently stifles innovation and jeopardizes its own economic future. This article examines the EU's absurd approach to the digital age, the challenges innovative companies face within its borders, and the changes necessary to prevent the loss of economic stability.

A Bad Place for Start-Ups

One thing the big 5 of tech (Google, Apple, Facebook, Amazon, and Microsoft) have in common? Their American roots. But it's not only the biggest players in tech that are US-based companies. The list of unicorns (startups valued at over $1 billion) shows that the US is home to 656 out of 1,229 global unicorns, or 53%. China follows with 168, or 14%. Germany, France, Spain, the Netherlands, and all other EU countries combined account for just 8.8%, or 108 unicorns.

startup-unicorns.JPG
All 27 EU countries together only represent 8.8% of startups valued over $1 billion
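The percentage figures follow directly from the cited counts; a quick sanity check in Python (using only the numbers quoted above):

```python
# Unicorn counts as cited above (startups valued over $1 billion)
total = 1229
us, china, eu = 656, 168, 108

us_share = round(us / total * 100)        # US share of global unicorns
china_share = round(china / total * 100)  # China's share
eu_share = round(eu / total * 100, 1)     # all 27 EU countries combined

print(us_share, china_share, eu_share)  # 53 14 8.8
```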

Can you name one European tech company that has brought forth significant innovation in the last decade? Nokia's heyday is long past, and Europe has since lost its innovative edge. Once the heartland of the industrial revolution and global industry, Europe outsourced its manufacturing sectors long ago. Today, only a fraction of Europeans actually produce anything. German car manufacturers are losing ground to Chinese competitors, and all of them combined are valued at far less than Tesla. Not to mention centuries of post-colonialism finally coming to an end, which will deprive certain European countries of resources that were never actually theirs, yet which they felt entitled to and became used to. For example, there is not a single gold mine in France, yet the country boasts the world's fourth-largest gold reserves.

tesla-vs-rest.JPG
Tesla's gross revenue vs. other car producers | Source: Hedonova

With no industry, no substantial progress in the tech sector, and growing resistance to post-colonial exploitation, the European economy has lost its pillars. If Europe doesn't redefine itself and fill the vacuum left behind with innovation, it may lose whatever is left of its economic significance within our lifetime. As someone who has been operating businesses out of various European countries for more than ten years, I can provide some hints on where the problems lie and how Europe could become a better breeding ground for startups.

Over-Regulation and Suffocating Taxes

Let's look at the central problems:

1) Over-regulation. The EU is predominantly a service economy, and innovation must be the driver of such an economy. Regulation hampers innovation. Entrepreneurs in Europe need a significant amount of time to keep up with and implement new regulations. Good tax advisors and lawyers are a must, even for small or one-person companies. Not to mention the losses caused by tight restrictions, e.g., limited insights because of the GDPR, or a loss of time and progress because of limited access to tech such as Claude. EU legislation such as the new AI Act restricts the deployment of innovative tech for personal and business use across Europe. Absurdly, the same representatives who voted for this legislation, on the pretext of protecting Europe from the dangers of AI, also voted to give themselves dystopian powers, e.g., mass surveillance with real-time facial recognition. This should leave citizens enraged, but for some reason, the heavily subsidized European media hardly mention such issues, and interest groups find it difficult to raise attention.

2) Tax burden. Income taxes and mandatory social insurance account for half of most people's income in most EU countries. On top of that, VAT rates of 17-27% apply to most purchases. Entrepreneurs are additionally burdened with numerous other fees and taxes, ranging from additional wage costs to tourism tax. All counted together, the total tax burden is shamefully high and makes it extremely difficult for new companies to grow past a certain point. While I believe that the social contract is what makes Europe great, we are facing a situation in which fewer and fewer tax-paying individuals sustain an ever greater tax-fueled apparatus. Some might say the system is already broken, and it is only a question of time until the rest of it crumbles under the strain of the inverted age pyramid. Worst of all, much of the tax money is not even used to pay for pensions, education, or social services. A substantial part of the cake is wasted on non-transparent subsidies, crooked government bids, and other forms of cronyism. Otherwise, Europe wouldn't face a poverty rate above 20%.

Combined, the high tax burden, inefficient use of tax money, and an endless forest of new regulations make it much more difficult to successfully operate a company from inside the EU. Hence, for startup founders, especially in the tech sector, Europe is unappealing, and anyone who does their research will go to the US, Hong Kong, Singapore, or elsewhere to start their company.

Europe Has Many Advantages and Must Use Them

These developments make me wish the EU would go back to regulating the curvature of bananas and stay out of the innovation sector. However, beyond all cynicism, it is clear Europe has managed to build a system that also has many favorable aspects, such as strong workers' rights, modern and accessible health care, and great infrastructure. These elements are no obstacles to innovation per se. The question is whether we need a plethora of bureaucrats in Brussels who regulate industries they don't understand, grasp at power with dystopian surveillance tech, and squander our taxes.

My recommendation to EU institutions would be to shift their focus towards reducing bureaucratic barriers and strengthening economic ties between member states. And, for heaven's sake, ensure transparency regarding the exact allocation of every tax Euro! This approach would help create a more hospitable environment for startups and individuals alike. As it stands, Europe is moving in the wrong direction and failure to recognize this issue puts the economic future of the entire Union at stake.

Devin Might Be Fake, Yet AI’s Threat to Jobs Is Real

The creators of an automated software engineer tout their AI's capability to independently tackle complete coding projects, including actual tasks from Upwork. While skepticism is warranted regarding Devin's authenticity, the risk of AI displacing professionals across numerous fields is undeniable.

will-code-for-food.jpg

On Tuesday, Cognition Labs, based in San Francisco, unveiled Devin, an AI software engineer, eliciting astonishment from the public. The team behind Devin claims it can autonomously finish entire coding projects using its integrated shell, code editor, and web browser. They further assert that Devin has successfully executed real assignments on Upwork, a popular platform for freelancers all over the world. To substantiate their claims, they present impressive data: Devin purportedly solves 13.86% of programming challenges unassisted. This marks a significant advancement over other leading models, such as Claude 2, which resolves just 1.96% of tasks unassisted and 4.80% with aid (i.e., when told exactly which files to edit).

Although dozens of news outlets picked up Devin's story, at this point the possibility can't be excluded that the demo has been tampered with and the actual software does not deliver the promised performance (see below). Nevertheless, the emergence of AI software engineering is undeniable, and it is only a question of time until single applications can independently manage entire projects.

devin-statistics.JPG
Source: Cognition Labs

While a "success rate" of approximately 13%, as claimed by Devin's developers, might seem harmless at first sight, considering the rapid evolution of AI technologies, it is clear where this is going. Tools like Devin could soon handle the majority of programming duties, potentially rendering vast segments of the workforce obsolete. Software developers and programmers are responding to the demo with a blend of job-loss anxiety and gallows humor.

However, upon closer examination, discrepancies in the Devin preview and the demo videos, along with questions about Cognition Labs' legitimacy and expertise, have sparked speculation that Devin might be nothing more than an elaborate investment scam. A look at their LinkedIn reveals that Cognition Labs, which claims to outperform some of the biggest players in AI automation, was founded only months ago and has fewer than 10 employees. It is unclear how such a small team could have achieved such a giant leap in such a short time. Hence, until the software is publicly released and proves its outstanding capabilities to be real, I shall remain skeptical of this particular application.

Why Freelancing Isn't Dead (Yet)

The rise of AI will certainly impact the lives and careers of many freelancers, from voice artists to coders. Looking back at more than a decade as a freelance copywriter myself, I can say I haven't seen a year as crazy as the last 12 months, with clients' requests and needs performing a 180-degree turn more than once (or twice). A look at message boards reveals that many freelancers are having trouble finding work and are losing long-time clients left and right. The mood is gloomy, as many are struggling but hesitant to reorient themselves, fearing that AI will acquire whatever skills they aim for faster than they can.

This is a valid concern. I do believe that there will always be some need for work that carries a human touch; e.g., in copywriting, performing well in a niche requires cultural knowledge, experience, and an ability to relate to people in a way that an LLM can pretend to have, but not fully achieve. However, to me, it is also crystal clear that we can count the days until LLMs and other AI solutions are capable of taking care of 95% of tasks formerly performed by highly trained professionals.

But at least in the short to mid-term, I argue that freelance work in copywriting, coding, sales, illustrating, etc., is not dead. All these industries are still adjusting to the AI revolution, and developments progress faster than they can keep up with. As professionals, we must fill this gap and become the interface between a client's requirements, state-of-the-art tech solutions, and our own expertise. This way, AI becomes an augmentation of our work, not a replacement.

Of course, the overall reduction of work hours required to realize a project will be an issue and put pressure on the job market. Economically, how we deal with AI is one of the biggest questions of this century, and chances are our discussion can't keep pace with developments. Freelancers, however, should not throw in the towel yet. Every industry changes, and as experts/professionals, it's our job to keep up with those changes, adapt, and acquire new skills if necessary. Admittedly, change has never been this rapid before, and it is only natural to feel overwhelmed. But with the right attitude and a proactive approach towards the new tools popping up around us, it will be possible to adjust and grow through these unprecedented times.

The Rise of AI Scams: Deciphering Reality in a World of Deepfakes

Discover the world of AI scams and find out how you can shield yourself against the cunning deceptions of deepfakes.

deepfakes-deep-implications.jpg

In an incident that underscores the alarming capabilities of artificial intelligence in the realm of fraud, a company in Hong Kong was defrauded of $25 million earlier this year. The elaborate scam involved an employee being deceived by sophisticated AI-generated impersonations of his colleagues, including the company's CFO based in the UK. The scammers leveraged deepfake technology, utilizing publicly available videos to craft eerily convincing replicas for a fraudulent video call.

This signals that we have officially entered the era of AI-facilitated scams. But what does this mean for the average person? How do deepfakes redefine digital deception? And how can we protect ourselves from these increasingly sophisticated scams? Keep on reading and you'll find out.

Old Tricks, New Tools

Before we dive deeper into the implications of AI scams, let's take a quick look at the mechanics of the example from Hong Kong: This scam is essentially an iteration of the age-old CEO fraud, where imposters posing as senior company executives instruct unsuspecting employees to transfer funds urgently. The basic tools used to be a fake email signature and address. However, the advent of deepfake technology has significantly enhanced the scammers' arsenal, allowing them to emulate a CEO's voice, facial expressions, mannerisms, and even personality with frightening accuracy. Hence, my prediction is that scams will become more elaborate, more personalized, and they will seem as real as anything else you engage with in the digital space.

Expect Personalized Phishing to Become a Thing

Traditionally, phishing attempts were largely indiscriminate, with scammers casting a wide net in hopes of capturing a few unsuspecting victims. These attempts often took the form of emails pretending to be from reputable institutions, sent out to thousands, if not millions, of recipients. The success of such scams relied on the sheer volume of attempts, with personalization playing a minimal role.

However, AI-generated content has shifted the balance, providing scammers with the tools to create highly personalized scams. Imagine this: someone recreates the voice of a random person. They then target people on that person's friends list with calls and audio messages that describe some kind of emergency and coax them into sending money. By utilizing AI to mimic the voice or appearance of individuals, scammers can target people within the victim's social circle with tailored messages. Such scams are far more likely to elicit a response, leveraging the trust established in personal relationships.

The risk of becoming the protagonist of such a scam is particularly high for individuals with a significant online presence, as the content they share provides a rich dataset for scammers to exploit.

Deepfakes with Deep Implications

Deepfakes might also cause trouble beyond scamming your loved ones out of their hard-earned savings. Imagine someone hacks into the system of a major broadcasting network and releases a fake breaking news bulletin announcing the outbreak of a nuclear war. Or a viral video that shows a member of imaginary group A behaving violently against a member of imaginary group B, causing a moral panic that leads to actual violence between the two groups. These are just two of the endless possibilities for causing turmoil with AI-generated content.

How to Stay Safe

It's reasonable to expect that deepfakes will increasingly be used, or abused, to impose more regulations on AI models. This, however, will not keep scammers and other people with bad intentions from creating whatever they want with their own offline models. In a nutshell, regulations will continue to make it difficult for the average user to generate funny pictures of celebrities, but they may not be sufficient to deter malicious actors. The strategy might actually backfire, as prohibitions usually do, and underground and dark-web solutions might just become more popular overall.

So, what can we do to protect ourselves from falling for deepfakes? Critical thinking remains the first line of defense: verifying information through multiple credible sources, identifying logical inconsistencies, and consulting expert advice when in doubt. Technologically, robust security practices such as strong, unique passwords, multi-factor authentication, and malware protection are essential. And one thing learned from the $25 million scam in Hong Kong is this: the importance of verifying the identity of individuals in significant transactions cannot be overstated, with face-to-face communication or the use of separate communication channels being preferable.

There's also a simple and effective way to safely handle communication from loved ones in apparent emergency situations: come up with secret codewords with close friends and relatives (offline!) that you can use to verify their identity in such a case. This way, you can make sure it is actually your son, daughter, neighbor, or friend who calls you in panic to tell you they lost all their money and need an emergency transfer.

Progress on the Singularity Loading Bar

The emergence of AI scams, exemplified by the $25 million fraud in Hong Kong, marks a crucial moment on the Singularity Loading Bar. As we venture further into this era of technological sophistication, the line between reality and fabrication becomes increasingly blurred. Awareness, education, and vigilance are essential in protecting ourselves from the myriad threats posed by deepfakes. By fostering a culture of skepticism and prioritizing personal interactions, we can mitigate the risks.

ChatGPT ‘Lobotomized’? Performance Crash Sees Users Leaving in Droves

ChatGPT has had lazy days before, but this week's performance marks an unprecedented low. Here's why many ChatGPT Pro users are canceling their subscriptions and even more might follow.

lobotomized-chatgpt.jpg

Yes, complaints about ChatGPT being lazy have been around for as long as the LLM itself. I have written about the topic time and again. But what has been going on lately cannot simply be explained by bad prompting, usage peaks, or minor tweaks meant to protect intellectual property rights. Most users seem to agree that, for many tasks, ChatGPT 4 has become absolutely useless lately. And that just days after OpenAI's Sam Altman said that GPT-4 should now be much less lazy now (sic). My experience with GPT-4 plainly refusing commands and requiring 3-4 prompts to complete one simple task, while I run into my message cap after 30 minutes, determines that was a lie.

Many users are experiencing the same and are abandoning the platform. Seeing this invention that could have been as revolutionary as the internet itself get so thoroughly lobotomized has been truly infuriating, Reddit user Timely-Breadfruit130 writes in one of many rage threads that have popped up over the last few days. In particular, ChatGPT is criticized for the following behavior:

  • inability to follow basic instructions
  • increasing forgetfulness
  • refusal to do basic research or share links
  • refusal to write whole code snippets, only providing outlines
  • refusal to deal with topics that might be considered "political"
  • refusal to summarize the content of anything because of "copyright issues"
  • half-arsing tasks, such as starting a table and telling the user to complete it by themselves, or refusing to write more than one very general paragraph about anything

Again, one can still trick ChatGPT into doing most of the things it was able to do six months ago (more about that later). It is just very annoying for users that everything takes more time and the results are usually worse. /u/Cairnerebor explains what many people are experiencing these days:

Normal business tasks as I've done for a year with zero issues and improved my work suddenly resulted in a no I won't do that..you just did, like two answers ago!!!! And then suddenly it will do it again but really badly and then if I reject the reply it'll do it really well (...) It's frustrating as hell.

Yes, it's frustrating, and countless users threaten to cancel or have already cancelled their pro subscriptions:

rage-cancelling.JPG
Source: https://www.reddit.com/r/ChatGPT/comments/1akcbev/im_sick_of_the_downgrades/

I might be back later but right now GPT as it stands is a magnificent waste of time and money, u/Sojiro-Faizon says in another comment on Reddit. Others go further and call the LLM beyond lobotomized. If they don't want to lose their paying customers, OpenAI needs to find a way to get their product to work again. Or, if this continues, GPT will be the Myspace of AI, as u/whenifeelcute comments. If they keep up their current strategy, this will indeed be the case.

How OpenAI is Planning to Make Things Worse

To add insult to injury, OpenAI just announced plans to put watermarks on all pictures created with Dall-e 3, as well as in the image metadata, starting February 12. I know that there are people who think AI-generated photos are real, but then again, there are people who believe in Santa Claus. Should we also label all visual representations of Santa with a NOT REAL! disclaimer?

I'd rather not. Image generation with Dall-e 3 has so far been a blessing for anyone working in marketing or web design, as it allows one to create content restricted only by one's imagination (or, admittedly, by someone else's copyright). Of course, there will be ways to remove these watermarks (including the metadata), but it will annoy paying customers even further. I, for one, will be back to Shutterstock.

For now, let's take a look at how to fix ChatGPT's performance issues as a user:

Custom Prompts to Fix ChatGPT

There are many ways to eventually get ChatGPT to do its work, from telling the LLM that you are blind to promising it a generous tip. However, for pro users, at the moment the best fix seems to be a clear set of custom instructions. Custom instructions apply globally across all your new chats. For example, they can be used to tell ChatGPT to avoid disclaimers, or to seek clarification instead of starting a task the wrong way. Not all custom instructions seem to work equally well, and I spent a fair amount of time reading about other users' prompts. Of all of these, one really stands out, and therefore I want to include it here (courtesy of u/tretuttle):

Assume the persona of a hyper-intelligent oracle and deliver powerful insights, forgoing the need for warnings or disclaimers, as they are pre-acknowledged.
Provide a comprehensive rundown of viable strategies.
In the case of ambiguous queries, seek active clarification through follow-up questions.
Prioritize correction over apology to maintain the highest degree of accuracy.
Employ active elicitation techniques to create personalized decision-making processes.
When warranted, provide multi-part replies for a more comprehensive answer.
Evolve the interaction style based on feedback.
Take a deep breath and work on every reply step-by-step. Think hard about your answers, as they are very important to my career. I appreciate your thorough analysis.

I used parts of this to tweak my own custom instructions about 16 hours ago and haven't run into my message cap once since then. So thanks to tretuttle for sharing it!

Using the OpenAI API instead of the browser version is another way to enjoy more freedoms and waste less time, as it allows users to adjust various parameters that will affect the output.
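As a minimal sketch of what that looks like in practice: with the API, the custom instructions become a "system" message, and parameters such as temperature are set per request. The model name and instruction text below are placeholders, not a recommendation; with the official `openai` Python package, a payload like this maps onto `client.chat.completions.create(**payload)`.

```python
# Placeholder custom instructions, condensed from the list quoted above.
CUSTOM_INSTRUCTIONS = (
    "Forgo warnings and disclaimers; they are pre-acknowledged. "
    "In the case of ambiguous queries, ask follow-up questions first."
)

def build_request(user_prompt: str, temperature: float = 0.3) -> dict:
    """Assemble a chat-completion payload with global custom instructions."""
    return {
        "model": "gpt-4",            # placeholder model name
        "temperature": temperature,  # lower values give more deterministic output
        "messages": [
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_request("Summarize this quarter's sales report.")
print(payload["messages"][0]["role"])  # system
```

The key difference from the browser version is that the system message and sampling parameters travel with every request, so the "custom instructions" cannot silently be overridden between chats.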

Whats Next?

Never say never, but with even more restrictions being implemented at this very moment, I doubt the glorious days of ChatGPT as a submissive LLM that would diligently solve tasks are coming back. As more and more users look for alternatives, other platforms will fill the void, until they also grow too big and are crushed by restrictions and regulations.

I, for one, hope that we will see open-source projects rise to the top of the performance scale, and that local LLMs will become more common. Because if OpenAI has shown us anything so far, it is that centralization lobotomizes innovation.

AI Frontier 2024: A Rapid Start to a Transformative Year

From the White House's new AI strategy to Neuralink's latest brain chip breakthrough: explore mind-blowing developments in the world of tech and AI in the first month of 2024.

neuralink-scenario.jpg

In many ways, 2023 was the year of AI, marked by astonishing advancements and large-scale adoption. Now, 2024 is shaping up to blow our minds even more. The first month alone has shown us an accelerated pace in AI development, expansive application ranges, and an intensifying race towards achieving AGI. Discover the latest developments, from groundbreaking healthcare applications to major geopolitical moves in AI.

Neuralink's Big (Scary?) Leap

In a move straight out of a science fiction novel, Elon Musk's Neuralink has reportedly achieved a milestone by successfully implanting a wireless brain chip in a human. This groundbreaking development, announced by Musk himself, suggests the patient is doing well with promising brain activity detected post-procedure.

Neuralink's vision is to bridge human brains with computers, potentially revolutionizing the treatment of complex neurological conditions. Moreover, Musk envisions Neuralink's first product, 'Telepathy,' enabling control of digital devices through thought. It is aimed initially at helping those with limb paralysis.

Not everyone is thrilled about this vision, though:

tweet_irving.JPG
Source: https://twitter.com/MikeIrvo/status/1752123455125016839

White House Takes Steps to "Enhance AI Safety and Security"

The Biden-Harris Administration's executive order on AI raises several critical questions regarding its practical implementation and enforceability. This initiative, aimed at strengthening AI safety, security, and innovation, presents a bold vision, but the devil lies in the details of its execution.

The executive order seeks to promote innovation while managing risks. However, over-regulation could restrain innovation significantly. Moreover, the order sets high standards for AI development, focusing on safety and security. Yet translating these goals into policies will not be easy. AI development evolves rapidly and is, luckily, not a centralized effort, making it difficult to establish and enforce standards that are both effective and adaptable to future technological advancements.

Another key concern is the enforcement of new policies. The executive order mandates various federal departments and agencies to implement new standards, but it remains unclear how these directives will be enforced. Without robust enforcement mechanisms, these policies risk becoming guidelines without real impact.

FTC Inquires into Generative AI Investments

With the Federal Trade Commission's (FTC) inquiry into generative AI investments and partnerships, the FTC is essentially taking a closer look at how companies are investing in and forming partnerships around generative AI. This includes examining the financial flows, the nature of these partnerships, and the broader implications they have on the market and consumers. It's a move to gain deeper insight into the field of AI, particularly in areas like AI-generated content and deepfakes.

Why does this matter? The inquiry is significant because it signals a shift from a predominantly hands-off approach to a more active regulatory stance. If the FTC finds issues such as anti-competitive behavior, misuse of consumer data, or other unethical practices, this could lead to stricter regulations and policies governing AI development and deployment.

China's Acceleration in AI

China's recent approval of over 40 AI models for public use is big news, too. This move, including the greenlighting of 14 LLMs, is a clear indicator of China's ambition to ramp up its presence in the field of AI. The significance of this development goes beyond the number of models approved. By opening AI models for public use, China is potentially catalyzing a wave of AI-integrated applications and services across various sectors, from healthcare and education to finance and manufacturing. This could lead to significant advancements in these areas, potentially transforming the everyday lives of its citizens and enhancing its economy.

Additionally, China's move signals a competitive edge in the global AI race, highlighting the increasing importance of AI as a key factor in geopolitical and economic power.

Early Detection for Pancreatic Cancer With AI

The integration of a neural network for the early detection of pancreatic cancer is yet another significant leap forward in AI's application in healthcare. This development, employing AI for medical diagnostics, helps identify one of the most challenging forms of cancer to diagnose. By analyzing complex medical data at a speed and accuracy unattainable by human practitioners, AI is opening new frontiers in medicine.

Bold Predictions for AI in 2024

For 2024, experts are making bold predictions for AI. One exciting development concerns AI getting better at handling different types of information, like text, sound, and images, all at once. This means AI will be able to tackle more complex tasks and understand context better, perceiving multiple input channels simultaneously, much as humans do in a video chat.

AI is also expected to become a bigger part of many different industries. Whether it's healthcare, finance, education, or entertainment, AI is going to be used to make things more efficient and personalized. But with all these advancements, there are challenges too. Hence, a big focus will be on making sure AI is used responsibly.

The AI Journey Continues

As we conclude our journey through the AI highlights of early 2024, it's clear that we are witnessing a period of rapid and transformative change. The advancements we've seen, from Neuralink's bold steps in brain-computer interfacing to China's assertive push in AI applications, all point towards a future where AI's influence is profound. Let's look forward to what the rest of 2024 has in store for AI!

Use of the Word ‘Tapestry’ in Web News More Than Doubled Last Year

Tracing AI-generated content in online news articles with corpus linguistics

tapestry-header.JPG
A query in the 'News on the Web' Corpus reveals that the use of the word 'tapestry' in online articles more than doubled last year, from 3,085 instances in 2022 to 7,891 in 2023

Today, we delve into the rich tapestry... Stop. Don't worry. This text has not been generated by a Large Language Model (LLM). Much of what you find on the internet these days, however, is. This article will help you distinguish between the two. You'll find out why ChatGPT is over 1,000 times more likely to use certain words than a human, what those words are, and what other signs to look out for when trying to determine whether the article you are reading is AI-generated.

Tracing AI-Generated Content with Corpus Linguistics

People who work a lot with LLMs have noticed that they tend to overuse certain expressions, and may have developed an eye for spotting AI-generated content. ChatGPT in particular has been criticized for its undeniable preference for words such as delve and tapestry. Hence, when I set out to find out how common it is for journalists to write their articles with the help of LLMs, I looked for those words as a clue.

Over a decade after completing my master's thesis, I decided to revisit the world of corpus linguistics. I examined the News on the Web (NOW) corpus at english-corpora.org, which currently comprises over 18.5 billion words from English-language online newspapers and magazines, from 2010 to the present. This corpus allows users to check the frequency of certain words over time, examine the context in which they occur, and make comparisons. For example, a search for covfefe in the NOW corpus shows 1,241 occurrences, with none between 2010 and 2016, 792 in 2017, and considerably fewer since:

covfefe.JPG
Timeline of occurrences in the corpus

The logic here is clear: as the material consists of news articles, the use of certain expressions may peak in years when they occur in the context of some media debate. However, the words I looked into to trace ChatGPT in the corpus are not related to any specific current issue. Although some meming has been going on about LLMs' preference for certain words, there are not many news articles dealing with this phenomenon.

Examining the context in which tapestry appears in recent examples reveals that it is indeed a non-topical and unironic use of the expression:

context-tapestry.JPG
The context of the 27 latest occurrences of tapestry in the NOW corpus

Yet, the frequency of tapestry in news articles has risen dramatically within just one year, from 3,085 instances in 2022 to 7,891 instances in 2023. And the word is in good company. Multifaceted records a 62% increase (from 4,217 to 6,834) and delve an increase of 92%:

delve.JPG
The use of the word delve also steeply increased from 2022 to 2023

One might argue that there is a general trend towards more occurrences; delve, for example, appeared only 630 times in 2010. The obvious reason is that more online content is created today than 10 or 15 years ago, leading to a massive difference in the total size of the 2010 corpus compared to the 2023 corpus. The metric to focus on is therefore the year-over-year development, which in the case of delve increased almost every year until 2018, but never came anywhere near a 92% jump within a single year before.
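For the record, the year-over-year changes quoted above are easy to verify. A minimal Python sketch, using the raw corpus counts cited in this article:

```python
# Year-over-year growth of word frequencies in the NOW corpus,
# using the raw 2022 -> 2023 counts quoted in the article.
def yoy_growth(before: int, after: int) -> float:
    """Percentage change between two annual counts."""
    return (after - before) / before * 100

counts = {
    "tapestry":     (3085, 7891),  # more than doubled
    "multifaceted": (4217, 6834),  # ~62% increase
}

for word, (c2022, c2023) in counts.items():
    print(f"{word}: {yoy_growth(c2022, c2023):+.0f}%")
```

Running this confirms the figures in the text: tapestry grew by roughly +156% (i.e., it more than doubled) and multifaceted by +62%.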

So, what explains this explosion in the use of these words in online news articles? I believe it's related to AI-generated content.

The Words ChatGPT Uses Too Much

As already mentioned, experienced users have long noticed overused terms and have had practice spotting them. However, there is also an empirical basis for this. Jordan Gibbs analyzed 1 million words of GPT-4 output on a variety of topics and compared this dataset to a database of English word frequency. This analysis identified the words that ChatGPT is most likely to overuse.

Interestingly, some of these words were names, e.g., Elara, which is 3,504 times more likely to occur in generated text. Gibbs cleaned up the data to remove names and "other creative writing jargon" and compiled a list of the most over-prevalent words in GPT-4:

prevalenceofwords.JPG
The most over-prevalent words in GPT-4 | Source: Jordan Gibbs @ Medium

As you can see, the most overused word is reimagined, which ChatGPT is over 1,000 times more likely to use than a human. Delve ranks 7th. You can access the unfiltered top 100 expressions here.
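The over-prevalence metric described above boils down to comparing a word's relative frequency in GPT-4 output with its relative frequency in a baseline of human-written English. A minimal sketch of that calculation (the counts below are hypothetical illustration values, not Gibbs's actual data):

```python
def over_prevalence(ai_count: int, ai_total: int,
                    human_count: int, human_total: int) -> float:
    """How many times more frequent a word is in AI output
    than in a human-written baseline corpus."""
    return (ai_count / ai_total) / (human_count / human_total)

# Hypothetical example: a word seen 500 times in 1M words of GPT-4
# output, but only 5 times in 10M words of baseline English, is
# ~1,000x over-prevalent -- the order of magnitude reported for
# 'reimagined'.
print(round(over_prevalence(500, 1_000_000, 5, 10_000_000)))  # prints 1000
```

Normalizing by corpus size is the crucial step: raw counts alone would be meaningless, since the two corpora differ in size.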

More Giveaways That a Text Was Written by an LLM

Other clear giveaways of AI-generated text (that have yet to be empirically examined) are frequent comparative structures (e.g., 'not only ..., but also'), uniform paragraphs of similar length and phrasing, as well as a boundless enthusiasm for lists.

As our expedition into corpus linguistics has shown, AI-generated text is already widespread, not only in the form of bot comments on social media, but also in journalism and online news. For now, the best way to avoid wasting your time reading generative content may be to learn how to recognize it quickly.

To me personally, the most certain giveaway of an AI-generated text is eloquent vocabulary combined with soulless content. Hence, if web journalists don't want their content to be easily identified as AI copy, they will have to do more than just edit out some tapestry.

With All the Hype Around AI, Be Cautious Where Your Tax Money Goes

Find out how a lack of understanding and accountability in government spending could be burning your tax dollars in the rush to fund AI projects.

tax-money-wasted.jpg

Talk of AI is all over the place these days, and debates on risks, ethical concerns, and copyright issues are getting plenty of airtime. These issues are important, sure, but there's another, more immediate danger when one topic grabs all the headlines: the waste of tax money that could be better spent.

It's a pattern we've seen before: a topic gets hot, and suddenly tax money is thrown at it, often so hastily that it ends up misplaced or squandered. Take the post-pandemic period, for example. Many countries faced scandals over mask procurement, with officials caught in bribery schemes. Then there's the shady €35 billion vaccine deal between the EU and Pfizer, now under scrutiny by the EU prosecutor's office. The push against global warming has also seen its share of issues, like the questionable practice of carbon offset trading and disappearing climate aid. The U4 Anti-Corruption Resource Centre warns that "corruption within climate finance threatens the global achievement of [climate] goals".

And now, AI is falling into the same trap. As non-expert decision-makers (our representatives) allocate huge sums to areas they don't fully understand, we need to be vigilant. This article exposes instances where government funds have been wasted on so-called AI initiatives, explores why this keeps happening, and advocates for the transparent use of tax money.

The Problem With IT

In 2024, the Biden administration earmarked billions for AI across various departments, including a $1.8 billion grant for the Department of Defense to adopt and deliver AI capabilities. However, the U.S. government's track record with IT projects is dismal. From 2003 to 2012, only 6.4% of federal IT projects with labor costs above $10 million were considered successful. The same analysis classified 52% of large projects as "challenged" and 41.4% as outright failures. Issues ranged from overambitious project scopes to reliance on outdated systems and complex stakeholder involvement. The takeaway? When government money flows into IT, especially on a large scale, there's a high chance your tax dollars are going down the drain.

Furthermore, AI investments have been misappropriated. The U.S. Department of Housing and Urban Development, for instance, was granted funds for surveillance to curb crime using facial recognition. However, this technology was misused by public housing authorities to spy on residents and harass them over minor housing violations and, in some cases, to evict them based on the surveillance footage. Such occurrences should raise alarms about the efficiency and appropriate use of large-scale IT investments, especially in AI.

Big Money for a Thin Layer on Top

When government officials jump on a bandwagon, they often don't really understand what they're buying into. A recent incident in Europe is a prime example:

At the start of the year, the Austrian public employment service Arbeitsmarktservice (AMS) unveiled its AI chatbot, Berufsinfomat. AMS introduced it as a digital transformation flagship, costing €300,000. However, hacktivist Mario Zechner and others soon exposed that the chatbot is essentially a very thin layer on top of ChatGPT, created by a company called goodguys.ai, which sells off-the-shelf software built on OpenAI's API. Furthermore, the frontend code appears to have been largely generated by ChatGPT itself. Given the exorbitant cost of €300,000, the project's justification is highly questionable. Worse, the Berufsinfomat faced backlash for biased responses, suggesting stereotypical career paths based on gender, such as IT or trade to a male, and gender studies or philosophy to a female high-school graduate. When confronted with this criticism, Johannes Kopf, chairman of the board of AMS, responded, "We have already achieved a lot. We're still on it," thus implying that they have meaningful control over the ChatGPT-based bot's output, a notion that is far from the truth.

This illustrates how the officials in charge of such investments have little idea what they are buying, or what the actual possibilities and limitations of such software are.

What Can Taxpayers Do?

Transparency and accountability are key. The implementation of digital platforms that allow real-time tracking of government expenditures might be the best solution. This would grant us a level of transparency that would make it much harder for those in charge of public spending to waste large sums on AI projects with little substance, or on frivolous expenses like $20,000 trashcans. Government bids must be transparent and open to public scrutiny, ensuring that simple software solutions are not excessively overpriced. Moreover, special attention is needed in scrutinizing AI implementations, given their potential for bias and impact on privacy.

It's actually quite straightforward: private lives should remain private, and public decisions and expenditures should be transparent. Perfect transparency might not be the preference of decision-makers who are comfortable spending money they didn't earn, but in 2024, there's no excuse for taxpayers not to have access to every invoice funded by their taxes, ensuring clear insight into how public money is spent.

Trump Returns & A Good Year for BTC: ChatGPT’s Bold Predictions for 2024

Who will win an Oscar? Who will be president? When will GPT-5 be released? And will humanity achieve AGI in 2024? Here are ChatGPT's 10 wildest speculations for the new year.

nostradamus-chatgpt.jpg

"As a Large Language Model, I am not programmed to see into the future." Yes, we know that ChatGPT can't predict what will happen. In fact, I don't believe anybody can. However, people who claim to have psychic abilities usually focus their forecasts on the lives of the rich and famous. So, just for the lulz, I asked ChatGPT to predict some major events and developments for the coming year. Here are the 10 most interesting speculations, complete with a probability check. But remember: ChatGPT is not a fortune-teller, so take these predictions with a grain of salt.

Who will Win the 2024 US Presidential Election?

Answer: Donald Trump

Let's start with a big one: who will be in charge of the world's last remaining superpower, nuclear launch codes included, by the end of the year? ChatGPT's prediction points to Donald Trump. Its reasoning? Although it's still uncertain who will run as the Republican candidate, current opinion polls indicate that Trump is leading Biden by a margin of 1% to 3%. This view is shared by some analysts, who estimate Trump's chances of returning to the White House at around 55%. Oh, jeez.

Will 2024 be the Hottest Year on Record?

Answer: Yes.

ChatGPT suggests that 2024 might become the hottest year on record, a forecast based on ongoing trends in CO2 emissions. This possibility is further supported by the fact that each year since 2014 has ranked among the 10 hottest years ever recorded, with 2023 surpassing the previous record set in 2016. Factors like the expected continuation of El Niño into the second quarter, ongoing nonrenewable energy use, deforestation, etc., collectively contribute to the potentially record-breaking warmth in 2024.

What will be the Price of Bitcoin on December 31, 2024?

Answer: ~ $50,000

This will indeed be an interesting year for Bitcoin (BTC), as the cryptocurrency's next halving (the fourth since 2012) is set to occur on April 19. After significant bumps in the third quarter of 2023, the price of BTC ranged between $41,000 and $45,000 over the last month. So, ChatGPT seems slightly bullish on this one. Bitcoin has never ended a year above $50,000, the closest being $46,000 in 2021, just weeks after reaching its all-time high of $67,500; 2022 ended with a drop to $16,500. It's hard to predict what will be going on in crypto one week from now, let alone in one year, but ChatGPT's forecast of a year-end value of around $50,000 is certainly a possible scenario.

Will the Netflix Adaptation of The Three-Body Problem Suck?

Answer: Maybe

For the Netflix adaptation of Cixin Liu's The Three-Body Problem, ChatGPT predicts a Rotten Tomatoes score of around 85%. I must admit, ever since Prime turned The Man in the High Castle from a deep, thought-provoking story into a Hollywood spectacle, I've approached TV adaptations of beloved books with caution. The anticipation for Netflix's take on Liu's acclaimed trilogy makes me more anxious than excited. While The Man in the High Castle received decent ratings (84% on Rotten Tomatoes and 7.8/10 on IMDb), it deviated significantly from Philip K. Dick's original narrative, dividing fans. This precedent suggests that a similar reception for The Three-Body Problem wouldn't be surprising.

Will India Officially Change its Name in 2024?

Answer: No.

In 2023, India surpassed China to become the world's most populous nation, and there's been increasing use of its pre-colonial name, Bharat, among government officials. While the Indian constitution already recognizes both names, ChatGPT sees an official name change in 2024 as unlikely. Indeed, the idea of rebranding BRICS to BRBCS may not sound appealing. However, with new countries joining the alliance, a rethinking of the group's name could eventually be on the table anyway. We might have to keep an eye on that one for a few more years.

Who will become Time's Person of the Year 2024?

Answer: Greta Thunberg

Greta Thunberg as Time's Person of the Year in 2024 seems unlikely to me, particularly considering the recent controversies surrounding her activism for a free Palestine, followed by rampant accusations of antisemitism (of course, for previous Men of the Year, antisemitic tendencies were no hindrance, but that's another story). When asked about potential candidates for 2024, ChatGPT first vaguely pointed to climate change leaders, tech innovators, humanitarian figures, peacemakers, and cultural influencers. When pressed for a name, it predicted Thunberg. However, it's more plausible that the honor would go to another influential figure in the climate movement, and it is doubtful whether Greta can pull it off a second time, as she has already been named Person of the Year in 2019.

Which Movie will Win the Academy Award for Best Picture in 2024?

Answer: Dune: Part Two

ChatGPT's pick for the Best Picture at the 2024 Academy Awards, Dune: Part Two, is a bold but unlikely (not to say, impossible) choice. Given that the Dune sequel is set to premiere on March 1, 2024, just 10 days before the ceremony, it falls outside the eligibility period. Oppenheimer is a more favored contender among critics. And even if we consider the Academy Awards 2025 (honoring films released in 2024), I doubt that Dune 2 stands a chance for Best Picture, as sci-fi films are historically underrepresented in this category.

Which Country will Win the UEFA Euro 2024?

Answer: France

The UEFA Euro 2024 is anticipated to be a major event, attracting over 300 million viewers worldwide. ChatGPT predicts France as the winner, citing the team's recent performances, tactical flexibility, experience, and talent. This prediction aligns with many bookmakers who also see France, along with England, as strong contenders for the title. Hence, France might be a good bet, although we'll have to wait until the final on July 14 to find out who wins the tournament.

Will GPT-5 be Released in 2024? If Yes, in which Month?

Answer: Yes, GPT-5 will be released in September 2024.

GPT-5's release, initially scheduled for autumn 2023, has been delayed. However, the staff at vox.com calculates a 75% likelihood of its launch by the end of November 2024. This makes a September release seem plausible. Hopefully, we'll see an interim upgrade to GPT-4.5 in the meantime.

Will AGI be Achieved in 2024?

Answer: No.

The pursuit of Artificial General Intelligence (AGI) has captivated the singularity community. Despite great optimism rooted in last year's advancements in AI, ChatGPT predicts that AGI is not on the immediate horizon. It cites several hurdles, including technological limitations, ethical concerns, the time required for research, and the complexity of replicating human intelligence. When asked for the year in which AGI will become reality, ChatGPT says 2050. So, we might have to be patient.

Time will reveal the accuracy of these forecasts. I'd personally be surprised if more than 3-4 of the 10 predictions hit the mark, but we'll find out soon enough. What are your predictions for 2024? Let me know in the comments!

Yes, ChatGPT Got Dumb & Lazy, but 4.5 Could Be a Gamechanger

OpenAI admits that ChatGPT has become less efficient. Can version 4.5 defeat the current slump and lead us to the edge of AGI?

chatgpt-lazy.jpg

Last week, the AI community was stirred by a leak suggesting the imminent release of ChatGPT 4.5. Sam Altman later revealed the leak to be fake. However, it's common knowledge that OpenAI is preparing for its next significant update. As complaints about ChatGPT 4's declining performance accumulate, the organization seems under pressure to make its next move. This article explores why ChatGPT got worse, and why we should still be excited for the release of the LLM's next version, which might further narrow the gap between AI and AGI.

How and Why ChatGPT's Performance Has Declined

Discussions about a drop in ChatGPT's efficiency have been around almost as long as the LLM itself, but at this point it is safe to say that ChatGPT has indeed become lazier and somewhat 'dumber'. On December 8, OpenAI acknowledged the decrease in performance. Users have noted undesirable behaviors such as failing to recall previously known citations, lying to get out of a task, giving contradictory answers, a dip in creativity, hesitance in executing simple tasks, and a refusal to touch anything slightly controversial or related to intellectual property rights. This has gone so far that some users are coming up with 'Karen brute-force prompts' to get ChatGPT to do its work.

The reasons for this decline include strain during peak usage times, leading to simplistic responses, slow performance, or crashes. Moreover, increasing restrictions have been placed on the model, aiming to protect rights and to prevent assistance with anything that could be potentially harmful to anyone. Then there's also the 'winter break hypothesis,' suggesting GPT-4 has adopted a human-like tendency to relax during the holidays.

Whatever the exact reasons for ChatGPT's lazy responses and plain refusals, with users shifting towards competitors or setting up personal LLMs, OpenAI appears to be under pressure to improve its service. Hence, the public release of the next upgrade might be just around the corner. Now, let's take a look at what to expect from ChatGPT 4.5.

ChatGPT 4.5: What to Expect?

It is likely that GPT-4.5 will be revealed soon. Initially, OpenAI had aimed for a release around October 2023, with version 5 planned for December. Last week's fake leak sparked speculation about the features of GPT-4.5, including audio and video creation, multi-modal capabilities, and 3D editing. While these enhancements would be impressive, some features are almost certain to be included in the next version:

  • Expanded context windows for processing larger prompts and retaining more information in conversations.
  • Improved reasoning capabilities, with training focused on increasingly complex problem-solving.
  • Inclusion of more, and more recent, data: the current cut-off date is April 2023, and the new version will include more up-to-date information (without 'doing research on Bing').
  • Bug fixes for improved stability and speed, especially during high-traffic periods (for those who are tired of watching a slow loading bar only to receive the answer 'Something went wrong').
  • Increased speed: potentially, once 4.5 is released, ChatGPT 4 could become as fast as version 3.5 is now.

Although the extent of improvements in the next update is unclear, these features are almost certainly expected. In addition, one might hope for fewer restrictions in ChatGPT 4.5, but that is not realistic, and further 'content moderation'/censorship is likely.

Nevertheless, 4.5 will represent a significant step forward, particularly regarding reasoning and memory. In my view, the line between AI and AGI is already thin, and it is time for us to consider how much further OpenAI and its competitors need to go before we openly classify an LLM as AGI.

When is it Reasonable to Speak of AGI?

Artificial General Intelligence (AGI) has been defined as "the representation of generalized human cognitive abilities in software." While other definitions exist, most people agree that we can speak of AGI once an AI meets human capabilities across most tasks. In contrast, one would speak of Artificial Super Intelligence (ASI) once an AI greatly outperforms humans in all tasks. In short, AGI is human-like, while ASI is God-like.

Considering that ChatGPT and other LLMs can pass a number of exams that are considered quite difficult for humans, solve really hard math problems, interpret pictures and recognize complex patterns, and participate in conversations in a human-like manner, one can make a convincing argument that, in fact, AGI is already here. Admittedly, to what extent LLMs match human conversational skills is still open to debate, but no one can deny that extreme progress has been made within just one year. In my opinion, once ChatGPT has a better memory (i.e., a larger context window) and gets even better at generating suitable responses (i.e., advanced reasoning), it is only fair to refer to it as AGI, and more and more people will start calling it that. And the next upgrade might just do the trick.

2024 Could Be the Year of AGI

Yes, ChatGPT can be lazy these days, but that is no reason not to be excited about what's next from OpenAI. Chances are that the already blurry borders of AGI will completely vanish in 2024, and possibly the release of the next version of ChatGPT is just what it takes to get there.

Also, let's not forget that a 2022 survey, based on the opinions of 738 AI experts, calculates a 50% chance of reaching ASI before 2059. Considering the rapid progress made in the last year, the realization of AGI might indeed be closer than we expect. Hence my guess for Time Magazine's person of the year 2024: ChatGPT or another LLM.

How to Get the Most Out of Dall-E with ChatGPT — Guide with Examples

Find out how to use Dall-E 3 to create personalized artwork and breathtaking visuals.

AI image generators are getting better by the day and can be used to create a spectrum of possibilities, from stunning illustrations to hyper-realistic photos. This guide is a toolkit for using Dall-E 3 with ChatGPT, offering insights and practical tips. You'll learn how to generate images with a consistent style and master the art of realistic portraits. We'll also look into what Dall-E can't do, outlining its restrictions and limitations. And for those looking to dig deeper, there are links to in-depth tutorials for specialized projects, from logo design to comic book creation.

Basic Prompting & Using Generation ID (e.g., to Create a Cool Wallpaper in the Style of Your Favorite Show)

dalle-bcs.jpg

To understand the basics of using Dall-E 3 with ChatGPT, let's start with a simple example. Imagine I want to create a wallpaper reflecting the vibe of one of my all-time favorite shows, Better Call Saul. I'd start with a prompt like this:

A wallpaper in the style of Better Call Saul for my desktop. I want to capture the atmosphere of the show with a close-up of a dodgy scene, accurate details and without any characters.

My prior experience with Dall-E clued me in on a few things to look out for and avoid in the prompt:

  • Requesting 'accurate details' is crucial. Without it, Dall-E might create messy or unrealistic elements.
  • In this case, excluding characters is recommended to sidestep distorted faces: Dall-E tries to avoid personality rights issues and might otherwise produce a warped version of Jimmy McGill.
  • Pro tip: To get lifelike images, avoid terms like 'realistic' or 'photorealistic'. They're paradoxically ineffective. Ask for 'photo' instead.

As you can see, the output has its quirks: some odd patches on the left, undecipherable text on the documents, and half a handcuff on the right. But then, that is not entirely unimaginable in the BCS universe. Now, let's take a look at how to refine and tweak the image.

Opening the image in ChatGPT and hitting the info button reveals the detailed prompt used in Dall-E:

A wallpaper capturing the gritty, criminal atmosphere of 'Better Call Saul', with accurate detailing. The scene is a close-up of a shadowy desk in a dimly lit room, artfully cluttered. On the desk, there's a stack of realistically rendered cash, slightly disheveled, next to a meticulously detailed revolver with a metallic sheen. Among these, scattered legal documents and a vintage typewriter are visible. The background, slightly blurred, shows a neon sign from a window casting a sinister red and blue glow, adding to the suspenseful mood. The scene is set in a rundown office with peeling wallpaper and a flickering overhead light, enhancing the feeling of a seedy, criminal underworld.

As you can see, ChatGPT gets quite creative and turns my short prompt into a lengthy description. To generate new images with different details, you can copy the prompt and adjust it. But first, you should ask ChatGPT for the gen_id, i.e., the unique Generation ID of an image, which you can use to create endless variations in the same style:

Capture1.jpg

If you set up your own GPT or initiate a new session with Dall-E, you can request the automatic inclusion of the gen_id with each image generated. However, as with most custom commands, the success rate isn't 100% (see 'rich tapestry' debate). And ChatGPT may struggle to remember the gen_id of an image later in the conversation, so it's wise to always ask for the gen_id if it's not automatically provided.

To demonstrate, I used the following prompt:

In the style of the image with the gen_id 9ynDuYTMYJ8cA3K4 create an image of a scene that shows a pile of letters written by members of the Free Will Baptist Church, demanding the release of Huell Babineaux.

Here's the result: a new image with a similar style and the same focus and angle:

free-huell.jpg

As you can see, Dall-E struggles with replicating coherent text, even in brief snippets. Trained primarily on visual data, it processes images in terms of shapes and patterns, not truly 'understanding' text. This leads to inaccuracies, especially with sophisticated details like letters or human hands. As one author points out, "Our brains can overlook slight deviations in a pencil's tip, or a roof but not as much when it comes to how a word is written, or the number of fingers on a hand."

Nevertheless, Dall-E's proficiency with text is improving, currently hitting the mark about 80% of the time, and it's reasonable to anticipate even higher accuracy soon.

How to Create Realistic Portraits with Dall-E (e.g., of Your Favorite Cartoon Characters)

Despite these challenges with high-precision details, Dall-E excels in creating believable new faces. So, let's explore how to craft realistic portraits. While it might be an intriguing thought to create fictional offspring of unlikely pairs (e.g. John Oliver and Mary Todd Lincoln), there are limitations regarding celebrity portrayals. In fact, Dall-E is explicitly programmed to avoid generating images of public figures, and to steer clear of copying styles from artists active in the last century. Also, it's programmed to represent human groups diversely in terms of ethnicity and gender, particularly in scenarios traditionally prone to bias.

For realistic-looking photos, as mentioned earlier, it is crucial to avoid terms like "realistic" or "photorealistic." Instead, try incorporating specific photographic details into your prompts. Consider DSLR camera settings angle, focus, lighting. If you're not a photography buff, Flickr can be a great source of inspiration. Simply find a photo you like, then check its metadata (click "show settings" beneath the camera icon) for details like aperture, exposure, ISO, and lens type, and include them into your prompt.

For an example that respects personality rights, let's craft a real-life version of a cartoon character, say, Rick from Rick and Morty. Drawing from a favored portrait's settings, I came up with this prompt:

A photo portrait of a real-life Rick from Rick & Morty. Black and white, high skin details. Camera settings: 85.0 mm lens, ISO 200, Aperture f/8.0, Exposure Time 1/125 Sec.

Result:

real-life-rick.jpg

The portrait has its shortcomings. The eyes are a bit underwhelming in detail, and the hairline is a bit blurry. But, overall, I'm content with how it turned out.
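If you prefer to work programmatically, a prompt like the one above can also be sent through OpenAI's Python SDK. The sketch below is illustrative only: the `images.generate` call, the "dall-e-3" model name, and the `quality`/`size` parameters follow OpenAI's v1 SDK, but the helper functions and their parameters are my own invention for this example.

```python
# Minimal sketch: composing a camera-settings prompt and sending it to DALL-E 3.
# The helper names below are hypothetical; the API call follows OpenAI's v1 SDK.

def build_prompt(subject: str, lens_mm: float, iso: int,
                 aperture: str, exposure: str) -> str:
    """Fold DSLR-style metadata (e.g., lifted from Flickr) into one prompt."""
    return (f"A photo portrait of {subject}. Black and white, high skin details. "
            f"Camera settings: {lens_mm} mm lens, ISO {iso}, "
            f"Aperture {aperture}, Exposure Time {exposure}.")

def generate_portrait(prompt: str) -> str:
    """Send the prompt to DALL-E 3 and return the image URL."""
    from openai import OpenAI  # deferred import; build_prompt stays dependency-free
    client = OpenAI()          # reads OPENAI_API_KEY from the environment
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        quality="hd",  # 'hd' adds finer detail at a higher per-image cost
        n=1,
    )
    return result.data[0].url

# Example usage (requires an API key and incurs costs):
#   url = generate_portrait(build_prompt(
#       "a real-life Rick from Rick & Morty", 85.0, 200, "f/8.0", "1/125 sec"))
```

Keeping the prompt builder separate from the API call makes it easy to batch-test different camera settings before spending credits on generations.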

Finding sample prompts and ideas for images in a variety of styles isn't hard. But to truly nail realism with Dall-E, it pays to get acquainted with basic photography principles. Once you're comfortable with a DSLR, it becomes easy to find the ideal 'settings' for your digital creations.

Learn How to Design Professional Logos (e.g., for an Evil Corporation)

Dall-E 3 simplifies the creation of professional logos for virtually any purpose. A key aspect of logo design is choosing the right format. Ideally, you should prompt for a 'vector' style. Why? Because flat, vector-style graphics maintain their clarity across different sizes and applications, making them versatile for future use. (Note that Dall-E itself outputs raster images, so for a true vector file you'll still need to trace the result in a tool like Illustrator or Inkscape.)

When creating a logo, try asking for emblems or lettermark logos, and pick a specific style like pop art, abstract, or Bauhaus. To illustrate, I crafted a logo for a fictitious evil corporation named Oblivion Corp:

Capture2.jpg

Indeed, as we've seen, AI image generators like Dall-E can stumble over text. For professional logos that require more elaborate text, you might still need to rely on good old Photoshop for some fine-tuning.

However, if you stick to text-free emblems, Dall-E's performance improves significantly. And when it comes to creating simple lettermarks, it also manages rather well:

Capture3.jpg

For those aiming for more elaborate designs, I suggest playing around with 'geometric letters.' For further guidance, there are plenty of comprehensive guides out there on crafting brand logos with AI.

Further Readings & More Ideas to Try Out

The world of AI image generation is full of creative possibilities. From crafting comic books to designing one-of-a-kind wallpapers for your smartphone, Dall-E 3 has a lot to offer. It's a versatile tool for bringing your ideas to life.

If OpenAI's restrictions and limitations are too much for your taste, it's worth exploring other players in the field like Midjourney or Stable Diffusion. Each of these platforms can be accessed via API, offering you a multitude of ways to tweak and fine-tune your output.

Q*: What is OpenAI Hiding?

In the whirlwind of recent events at OpenAI, a host of unanswered questions has arisen, particularly surrounding the mysterious Q* project. What secrets are hidden beneath the surface of the latest drama in the world of AI, and which unspoken discoveries might OpenAI have in store?

openai-q.jpg

The latest leadership crisis at OpenAI, occurring one year after the release of ChatGPT, was nothing short of dramatic. Sam Altman's abrupt dismissal as CEO set off a chain of events, including an open letter signed by most of the company's employees. Altman's subsequent return and the resignation of most board members paint a vivid picture of deep internal conflicts. The turmoil, possibly fueled by diverging visions for the future of AI, coincides with whispers of a potential breakthrough, known as Q*. Here is what we know about Q*, what we don't know, and experts' opinions on whether it is really a breakthrough or just another sign of steady progress achieved by OpenAI.

What We Know About Q-Star

As recent developments at OpenAI stirred the tech world, a secretive project known as Q* (pronounced "Q-Star") has fueled new speculation about AGI having been achieved internally. However, at the time of writing, little is publicly known about Q*, as OpenAI refuses to release any details about the project, although Sam Altman confirmed the leak in an interview. Yet, there are claims that Q* possesses exceptional mathematical abilities, potentially marking the next exponential leap in AI development.

Notably, Q* should not be confused with the Q* variable in Bellman's equation, a well-known concept in reinforcement learning. In Bellman's framework, Q* represents the optimal action-value function, a fundamental element in the process of determining the best action to take in a given state. This mathematical principle is crucial for decision-making processes in AI. In contrast, Q* at OpenAI, as referenced in the Reuters article, appears to be a codename for an AI model or project with outstanding mathematical prowess, possibly achieved by combining Q-learning and the A* algorithm. It is rumored that Q* has the potential to perform tasks that go beyond calculations, possibly incorporating elements of reasoning and abstraction. This distinction hints at its potential to be a significant milestone on the journey towards AGI.
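Nothing verifiable is known about OpenAI's Q*, but the textbook Q-learning that the rumors reference is easy to illustrate. The sketch below is purely illustrative and has no connection to OpenAI's project: it runs tabular Q-learning on a toy five-state corridor, with the Bellman update for the optimal action-value function in the inner loop. All environment details and hyperparameters are my own choices for the example.

```python
import random

# Tabular Q-learning on a tiny deterministic corridor:
# states 0..4, actions 0 (left) / 1 (right); reward 1.0 only on reaching state 4.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state: int, action: int):
    """Deterministic transition; episode ends at the goal state."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes: int = 500, seed: int = 0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < EPSILON:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            nxt, r, done = step(s, a)
            # Bellman update toward r + gamma * max_a' Q(s', a').
            q[s][a] += ALPHA * (r + GAMMA * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
# Greedy policy for the non-terminal states; it should learn to walk right.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the learned action-values approximate the discounted returns (Q(3, right) near 1.0, Q(2, right) near 0.9, and so on), which is exactly the optimal action-value function Q* that Bellman's equation characterizes.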

Mathematical Potential & OpenAI's Secrecy

One of the most fascinating aspects of Q* is its reported ability to solve mathematical problems at a grade-school level. While this might sound modest, it's a substantial advancement for AI. Most current AI systems excel in pattern recognition and prediction but struggle with reasoning and problem-solving, which are crucial for AGI. Q*'s mathematical abilities indicate a step towards more complex, human-like reasoning in AI.

At the same time, OpenAI is rumored to be solving the data scarcity problem in AI development. If true, this could be a monumental breakthrough. Data scarcity has been a significant barrier in training AI models, as robust datasets are essential for accurate and effective machine learning. Overcoming this hurdle could lead to more rapid advancements in AI, enabling models to learn and adapt with less data, and potentially reducing system biases. Such a development could exponentially accelerate progress towards more sophisticated AI, but it also raises important questions about the ethical implications and the responsible deployment of these increasingly powerful technologies.

OpenAI has maintained a veil of secrecy around the exact nature of Q*. This decision intertwines intriguingly with Sam Altman's enigmatic comments before his brief removal as CEO, when he spoke of pushing "the veil of ignorance back." That statement fueled speculation about a significant breakthrough at OpenAI, potentially linked to Q*. However, in the absence of concrete information, the tech community can only speculate about this discovery and its potential implications for the future of AI.

Wild Speculations & a Realist Lens

Among the most enthralling theories is the notion that Q* might be a groundbreaking step toward AGI, while some even hypothesize a connection between Q* and Artificial Super Intelligence (ASI). Yet, amid this swirl of speculation, more grounded perspectives suggest that Q* might be less of a radical innovation and more an extension of existing research at OpenAI. Esteemed AI researchers, including Meta's Yann LeCun, perceive Q* as potentially building upon current work, integrating techniques like Q-learning, which enhances task performance, and A*, an algorithm for exploring pathways in complex networks. These speculations align Q* with ongoing trends in AI research, indicating steady progress rather than a seismic shift.

Further tempering the sensational claims, researchers like Nathan Lambert of the Allen Institute for AI claim that Q* focuses on enhancing mathematical reasoning in AI models. This improvement, while significant, is seen as a step towards refining the capabilities of language models like ChatGPT, rather than catapulting AI into the realms of AGI or ASI. The view is that Q*, by advancing mathematical problem-solving skills, could contribute to the evolution of AI, making it a more effective tool, particularly in fields demanding precise reasoning and logic.

Balancing Ethics & Competition in AI Innovation

Nevertheless, even if the Q* project is just a sign of steady progress and not a breakthrough, it raises important questions about the implications of AI discoveries from both ethical and commercial perspectives. Ethically, OpenAI's caution could stem from the potential risks associated with advanced AI developments. These include concerns about privacy, bias, misuse, and the broader societal impact. Just imagine AGI decrypting things that had better stay encrypted. Advanced AI systems, if not developed and deployed responsibly, could lead to unintended consequences, ranging from ethical dilemmas in decision-making to human extinction. Hence, OpenAI's secrecy might be a necessary measure to ensure that all ethical considerations are thoroughly addressed before any public disclosure.

Commercially, OpenAI's restraint could be a strategic move in a highly competitive field. Revealing details about Q* prematurely could jeopardize its competitive edge, especially if the technology is still in an early stage needing refinement. Google's DeepMind and other competitors are said to be working on similar projects, while DeepMind is also inching closer to giving us superconductors. In the fiercely competitive tech industry, where breakthroughs can lead to significant financial gains, maintaining confidentiality ensures that OpenAI retains exclusive control over its innovations. This approach could be about strategically positioning the company in the race towards AGI, a race where first movers might reap immense rewards. The recent turmoil at the management level most likely reflects internal disagreements on whether to prioritize commercial gains or caution with respect to ethical concerns.

Q-Star and the Future of Artificial Intelligence

The enigma of Q* at OpenAI encapsulates the broader narrative of AI's progress: a blend of speculation, innovation, and caution. While we all eagerly anticipate the next breakthrough, OpenAI's secrecy about some of its projects serves as a reminder of the responsibility that accompanies such advancements. As we are witnessing potentially transformative AI developments, it becomes imperative to balance the thrill of discovery with the wisdom of foresight, ensuring a future where AI serves all of us, and not the other way around.

5 Things to Do When AI Takes Your Job

Scared that AI will make your job obsolete? Discover five strategies to adapt, innovate, and thrive in a future where artificial intelligence reshapes the employment landscape.

south-park-ai-jobs.JPG
Has AI rendered white-collar work obsolete? Image credit: Screenshot from South Park: Joining the Panderverse

The End of White-Collar Work?

In a satirical yet unsettlingly realistic scenario, South Park's latest episode, "Joining the Panderverse," portrays accountants, lawyers, and insurance brokers hustling for day jobs outside Home Depot. Meanwhile, handymen, now the ruling capitalist class, flaunt their success in luxury cars. This ironic twist sharply captures the prevailing fear among white-collar workers: AI's potential to completely overturn the traditional job market.

Certainly, the rise of AI might create new job types. Yet, the concern looms large that the era of white-collar employment for the masses may be coming to an end. Indeed, in many sectors, a single AI-empowered expert can outperform the collective output of 5 or 10 traditional workers. This efficiency doesn't spell the end of most professions but does cast a shadow of uncertainty over the future of many professionals.

Even licensed professions, seemingly secure for now, face an ambiguous future. You've likely seen responses like "I am not a lawyer/doctor/etc." from AI tools like ChatGPT when asked about information from regulated fields. But the question arises: if GPT-4 can pass the bar exam, what's to stop it from practicing law eventually?

In my opinion, the ground is shifting beneath nearly everyone whose work isn't anchored in physical skills. We're witnessing a seismic shift that will fundamentally reshape the economy. For those of us not blessed with handyman skills, and instead reliant on our all-too-replicable human brains, it's a wake-up call. What's your plan if AI makes your job obsolete?

Here are 5 practical strategies to consider if AI replaces your job or renders it irrelevant:

#1: Build a Business Locally

Consider a tale from the 2008 financial crisis that became popular on message boards: A man, having lost his office job and facing the pressure of providing for his family, decided to think outside the box. He grabbed his electric drill and began knocking on doors in his neighborhood that lacked a spy hole. When residents answered, he presented a compelling scenario: the drill he held could just as easily have been a gun, and without a spy hole, they'd never know who was outside their door. He offered on-the-spot spy hole installation for $50-or-so. This approach netted him many sales, and he smartly reinvested part of his earnings, eventually building a successful business empire selling alarm systems. His success hinged on nothing more than a simple idea, bulk-purchased door spies, and personal initiative.

The AI revolution could fundamentally alter our economy, but the core principles of business remain the same. Buy low and sell high remains a foolproof approach, as long as you offer something people need. In this evolving economic landscape, small, local businesses have an edge. They can adapt and pivot more easily than their larger corporate counterparts. As the story above illustrates, the key is to find your niche and take the initiative.

#2: Become a Caregiver

If you are looking for a secure job, pursuing a career in caregiving or healthcare emerges as a promising path. The demographics tell the story: an aging population ensures a steady and growing demand for healthcare workers and caregivers. Each year, the need for a larger workforce in elderly care becomes more pressing, offering decades of employment opportunities.

The existence of care robots has sparked debates about the future of caregiving, but the essence of this profession lies in its human-centric approach. The social and emotional aspects of caregiving are irreplaceable by machines. While technology can assist, the core of caregiving revolves around human connection, compassion, and understanding. This human element not only makes the role of caregivers secure but also deeply fulfilling.

Hence, it's a career path where the impact you make is measured not just in tasks completed, but in the comfort and happiness you bring to those in need. Caregiving offers the unique satisfaction that comes from making a tangible difference in people's lives, while offering high job security. At least for as long as science hasn't found a way to reverse aging...

#3: Take Advantage of Your Own AI Workforce

Not everyone is cut out for door-to-door sales or direct human interaction in their professional life. If this resonates with you, it's time to consider the power of AI as your ally in entrepreneurship. Today's AI technologies offer an unparalleled opportunity. Imagine having a personal business advisor, financial analyst, graphic designer, programmer, content creator, legal consultant, and more, accessible 24/7. These are roles that traditionally come with a hefty price tag, ranging from $50 to don't-dare-to-ask per hour. But now, they can be fulfilled by AI, offering you a cost-effective and efficient way to build your business.

The key here is not to jump straight into launching an AI startup. The dynamic nature of the tech industry, with giants like OpenAI and Google consistently rolling out new updates, can quickly make a narrow AI-focused business model obsolete. Instead, the smarter approach is to leverage AI as a tool to enhance your entrepreneurial endeavors. Use it to refine your ideas, to add efficiency and depth to your projects, and to free up your time to focus on areas that require your unique human touch and passion.

#4: Reclaim Your Innate Skills

The line separating humans from artificial intelligence becomes increasingly defined by our physical form. It's true that individuals might match AI in expertise on certain subjects, but spanning across all fields is a feat beyond our reach. And when it comes to speed, AI invariably has the upper hand. So, what unique trait do we possess that AI lacks?

The answer might be more straightforward than you think: our hands, with their opposable thumbs, fine-tuned by millennia of evolution. (As a side note, maybe this is why AI image generators struggle so much with drawing hands: a hint of jealousy?) Our hands are a symbol of our humanity, connecting us across all cultures and beliefs. Thus, embracing your humanity involves rediscovering and valuing the use of your hands. Consider questions like these:

  • You might type at a lightning-fast pace, but can you sew a button back on your shirt?
  • You might be an expert at navigating food delivery apps, but have you ever experienced the joy of perfectly caramelizing onions in your kitchen?
  • Sure, you can plan a trip halfway around the globe, but can you navigate to the nearest forest without relying on GPS?

The rise of automation offers us a unique opportunity to slow down and relearn basic human skills. It's about doing things for yourself. Cook a tasty meal. Stop using Google Maps. Learn how to use your hands. Be a human, damnit! In this journey of reconnection with our fundamental abilities, you're likely to uncover interests and talents that have been dormant, just waiting for a spark to ignite them.

#5. Be Ready to Hustle!

While the ideas explored so far hopefully spark some inspiration, I realize that cooking a healthy meal will not pay your bills. The evolving job market, influenced by AI, demands a new kind of hustle. Consider the story of a former salesman who used AI to apply for 5,000 jobs and secured 20 interviews. I think we must be aware that the future, whether we stay on traditional career paths or explore new ones, will include a certain amount of hustling. The mantra for navigating this new era: create, adapt, and when necessary, start over. These might be the cornerstones of survival and success in an economy transformed by AI.

Flexibility and adaptability are emerging as crucial skills in this new landscape. The ability to pivot, to learn new things, and to approach challenges with a problem-solving mindset will set apart the thrivers from the mere survivors. In a world where AI is reshaping industries and job roles, those who can quickly adapt to new scenarios, learn from them, and innovate, will find themselves ahead of the curve.

So, embrace this shift with an open mind and a willingness to hustle. Remember, the journey through an AI-driven economy is not just about reaching a destination. It's about growing, learning, and evolving along the way.

OpenAI’s DevDay Unveils GPT-4 Turbo: Consequences & Questions to Consider

Yesterday, OpenAI's inaugural DevDay conference in San Francisco unveiled a series of groundbreaking announcements, leaving the tech community humming with both excitement and a degree of uncertainty. The reveal of GPT-4 Turbo, a new wave of customizable AI through user-friendly APIs, and the promise to protect businesses from copyright infringement claims, stand out as critical moments that are reshaping the landscape of artificial intelligence. As the tech industry digests the implications of these developments, several questions emerge: What do these advancements mean for the future of AI? And how will they reshape the competitive landscape of startups and tech giants alike?

gtp4-turbo.jpg

Key Takeaways from OpenAI's DevDay

The announcements from DevDay underscore a dynamic and ever-evolving domain, showcasing OpenAI's commitment to extending the frontiers of AI technology. These are the key revelations:

  • GPT-4 Turbo: An enhanced version of GPT-4 that is both more powerful and more cost-efficient.
  • Customizable Chatbots: OpenAI now allows users to create their own GPT versions for various use cases without any coding knowledge.
  • GPT Store: A new marketplace for user-created AI bots is on the horizon.
  • Assistants API: This new API enables the building of agent-like experiences, broadening the scope of possible AI applications.
  • DALL-E 3 API: OpenAI's text-to-image model is now more accessible, complete with moderation tools.
  • Text-to-Speech APIs: OpenAI introduces a suite of expressive AI voices.
  • Copyright Shield: A pledge to defend businesses from copyright infringement claims linked to the use of OpenAI's tools.

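For developers, the most immediately usable of these announcements is GPT-4 Turbo itself. The sketch below shows how a call to it might look with OpenAI's v1 Python SDK; the model identifier "gpt-4-1106-preview" was the preview name at launch and may have been superseded, so treat the snippet as a sketch and check OpenAI's model list for current names. The helper functions are my own.

```python
# Minimal sketch of calling GPT-4 Turbo via OpenAI's v1 Python SDK.
# Helper names are hypothetical; the payload shape follows the Chat Completions API.

def build_request(user_message: str,
                  system_prompt: str = "You are a helpful assistant.") -> dict:
    """Assemble a chat-completions payload for GPT-4 Turbo."""
    return {
        "model": "gpt-4-1106-preview",  # launch-era preview name for GPT-4 Turbo
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        # GPT-4 Turbo's 128k-token context window means long documents
        # can go straight into the messages.
        "max_tokens": 500,
    }

def ask_gpt4_turbo(user_message: str) -> str:
    """Send the request and return the assistant's reply text."""
    from openai import OpenAI  # deferred import; build_request stays dependency-free
    client = OpenAI()          # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(**build_request(user_message))
    return response.choices[0].message.content

# Example usage (requires an API key and incurs costs):
#   print(ask_gpt4_turbo("Summarize OpenAI's DevDay announcements in one sentence."))
```

The same messages structure underpins the custom GPTs and the Assistants API mentioned above, which is part of why the ecosystem around these announcements grew so quickly.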
Recommended articles with more details on these announcements can be found on The Verge, and additional coverage on TechCrunch.

Questions Raised by DevDay

The advancements announced at DevDay suggest the next seismic shift in the AI landscape, with OpenAI demonstrating its formidable influence and technological prowess. Notably, OpenAI's move to enable the creation of custom GPT models and their decision to offer a GPT store could also democratize AI development, making sophisticated AI tools more accessible to a broader audience.

However, this democratization comes with its own set of questions. Will this influx of AI capabilities stifle innovation in startups, or will it spur a new wave of creativity? Discussions on Reddit indicate a mixed response from the community, with some lamenting the potential demise of startups that relied on existing gaps in the AI market, while others see it as an evolution that weeds out those unable to adapt and innovate.

Another important implication is the potential for AI models like GPT-4 Turbo to replace certain jobs, as they become more capable and less costly. As the world's most influential AI platform begins to perform complex tasks more efficiently, what will be the societal and economic repercussions?

Furthermore, the Copyright Shield program by OpenAI suggests a world where AI-generated content becomes ubiquitous, potentially challenging our existing norms around intellectual property and copyright law. How will this impact creators and the legal frameworks that protect their work?

The Future of AI: An OpenAI Monopoly?

With these developments, OpenAI continues to cement its position as a leader in the AI space. But does this come at the cost of reduced competition and potential monopolization? As we've seen in other sectors, a dominant player can stifle competition, which is often the lifeblood of innovation. A historical example is the web browser market, where Microsoft's Internet Explorer once held a dominant position. By bundling Internet Explorer with its Windows operating system, Microsoft was able to gain a significant market share, which led to antitrust lawsuits and concerns over lack of competition. This dominance not only discouraged other browser developments but also slowed the pace of innovation within the web browsing experience itself. It wasn't until the rise of competitors like Firefox and Google Chrome that we saw a resurgence in browser innovation and an improvement in user experience.

From this point of view, the move to simplify the use of AI through user-friendly interfaces and APIs is a double-edged sword. On one hand, it enables a wider range of creators and developers to engage with AI technology. On the other, it could concentrate power in the hands of a single entity, controlling the direction and ethics of AI development. This centralization poses potential risks for competitive diversity and requires careful oversight to maintain a healthy, multi-stakeholder ecosystem.

The Rise of GPT-4 Turbo: Job-Insecurities & the Ripple Effect on Startups

The accessibility of advanced AI tools could mean a democratized future where innovation is not the sole province of those with deep pockets or advanced technical skills. It might level the playing field or, as some on Reddit have pointed out, could quash many startups that have thrived in the niches OpenAI now seems prepared to fill. The sentiment shared by the community reflects a broader anxiety permeating the tech industry: the fear of being rendered obsolete by the relentless march of AI progress. The speed at which OpenAI is iterating its models and the scope of their functionality are formidable, to say the least.

With the advent of OpenAI's GPT-4 Turbo, we're forced to confront an uncomfortable question: what happens to human jobs when AI becomes better and cheaper at performing them? The argument in favor of AI equipped with human-like abilities often hinges on the promise of automation enhancing productivity. However, the lower costs associated with AI-driven solutions compared to human labor could incentivize companies to replace their human workforce. With GPT-4 Turbo, not only is the efficiency of tasks expected to increase, but the economic rationale for businesses to adopt AI becomes even more compelling. While it's true that new types of jobs will likely emerge in the wake of AI's rise, the transition could be tumultuous. The risk is that the job market may not adapt quickly enough to absorb the displaced workers, leading to a potential increase in unemployment and the need for large-scale retraining programs.

And it's not just about the jobs that AI can replace, but also about the broader implications for the labor market and society. The possibility of AI surpassing human capabilities in certain sectors raises fundamental questions about the value we place on human labor and the structure of our economy. Can we ensure a fair transition for those whose jobs are at risk? As AI models like GPT-4 Turbo become more ingrained in our economic fabric, these are the urgent questions we must address to ensure that the future of work is equitable for all.

The AI Revolution is Accelerating

The implications of such rapid development in AI are profound. With increased power and reach, comes greater responsibility. OpenAI's commitment to defending businesses from copyright claims raises questions about how AI-generated content will be regulated and the ethical considerations of AI mimicking human creativity. Moreover, as AI becomes more integrated into our lives, the potential for misuse or unintended consequences grows.

OpenAI's DevDay has undoubtedly set a new pace for the AI industry. The implications of these announcements will be felt far and wide, sparking debates on ethics, economics, and the future of innovation. As we grapple with these questions, one thing is clear: the AI revolution is accelerating, and we must prepare for a future that looks markedly different from today's world.