The Evolution and Coexistence of Web 2.0 and Web 3.0

Web 3.0 has grown in popularity over the last few years and has fast become a tool for empowering users with regard to the ownership and sharing of data. The premise is that Web 3.0 will return the digital industry to a decentralized model, driving forward monetization, ownership, and governance in ways that were not as prominent under Web 2.0’s centralized model.

The basic principles of technological evolution dictate that when a new model is created, it replaces the previous version; that’s precisely what happened between Web 1.0 and Web 2.0. However, Web 3.0 will not follow the same pattern; instead, it has the potential to enhance rather than replace Web 2.0.

How Web3 Is Driving Social and Financial Empowerment

In recent years, Web3 has been put forward as the most significant democratic revolution in the digital space. With big tech monopolies governing the exchange and monetization of information today, the promise of Web3 is the empowerment of users when it comes to the ownership and sharing of data. The decentralization of ownership is expanding to industries beyond the web, too, particularly when it comes to use cases for blockchain technology, such as decentralized finance (DeFi), so Web3 is very much part of a general trend toward the democratization of platforms and services.  

What Is Web3? 

Web 1.0 was decentralized by default and was defined by static pages of information that were accessible to anyone with a PC and an internet connection. The browsing experience was not tailored to individuals, so it was the same for everyone. With Web 2.0, the experience evolved with the advent of server-side scripting (e.g., PHP) and dynamically generated pages, with individual users not only accessing information but also having content generated just for them. This is where advertising, social media, and the exchange of information took off in a big way. Suddenly, the model became centralized, with large technology firms such as Meta (then Facebook), Google, and Amazon monopolizing platforms.

Model Cards and the Importance of Standardized Documentation for Explaining Models

Documentation is formally defined as material that provides official information or evidence and serves as a record. From a machine learning perspective, particularly with regard to a deployed model in a production environment, documentation should serve as notes and descriptions that help us understand the model in its entirety. Ultimately, effective documentation makes our models understandable to the many stakeholders we interact with. 

Whether we are deploying low-impact models serving basic and non-sensitive needs or high-impact models with significant outcomes, such as loan approvals, we have a responsibility to be transparent—not just to our end users, but to our internal stakeholders. Furthermore, it shouldn’t just be the data scientists who hold all knowledge underpinning a model—it should be open, accessible, and understandable to anyone. 
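One way to make such documentation standardized and machine-readable is to structure it as a model card. The sketch below is purely illustrative: the field names are assumptions loosely inspired by common model-card sections (model details, intended use, evaluation, limitations), not a formal standard, and all the values are hypothetical.

```python
# A minimal, illustrative model card structure. Field names and values
# are assumptions for demonstration, not an official schema.
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    training_data: str
    evaluation_metrics: dict
    limitations: list = field(default_factory=list)

    def to_dict(self) -> dict:
        """Serialize the card so it can be published alongside the model."""
        return asdict(self)

card = ModelCard(
    model_name="loan-approval-classifier",          # hypothetical model
    version="1.2.0",
    intended_use="Pre-screening of consumer loan applications; "
                 "not for final decisions.",
    training_data="Anonymized applications, 2018-2022 (hypothetical).",
    evaluation_metrics={"auc": 0.87, "false_positive_rate": 0.06},
    limitations=[
        "Not validated for business loans",
        "May underperform on sparse credit histories",
    ],
)
print(card.to_dict()["model_name"])
```

Keeping the card in version control next to the model code means it can be reviewed and updated with every retraining cycle, so non-technical stakeholders always have a current record.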

The Value of Machine Unlearning for Businesses, Fairness, and Freedom

Our work as data scientists is often focused on building predictive models. We work with vast quantities of data, and the more we have, the better our models and predictions can potentially become. When we have a high-performing model, we continue to retrain and iterate, introducing new data as required to keep the model fresh and free from degradation. The result is that the model’s performance level is largely retained, and we therefore continue delivering value for users. 

But what happens if restrictions around a data set or individual data point are introduced? How then do we remove this information without compromising the model overall and without kicking off potentially intense retraining sessions? A potential answer that is gaining interest, and that we would like to explore, is machine unlearning. 
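To make the idea concrete, here is a toy sketch of one family of approaches, shard-based unlearning (in the spirit of SISA-style training): data is split into shards, a simple model is fit per shard, and predictions are aggregated. Deleting a data point then requires refitting only the shard that contained it, not the whole ensemble. The "model" here is just a per-shard mean, purely for illustration; real systems would use actual learners.

```python
# Toy shard-based unlearning sketch. Each shard gets its own model
# (here, simply the mean of its points), and the ensemble prediction
# averages the per-shard models.

def fit_shard(points):
    """Fit a trivial 'model' to one shard: its mean value."""
    return sum(points) / len(points) if points else 0.0

def fit_all(shards):
    return [fit_shard(s) for s in shards]

def predict(models):
    """Aggregate per-shard models into one prediction."""
    return sum(models) / len(models)

def unlearn(shards, models, shard_idx, point):
    """Remove one data point and refit ONLY the affected shard."""
    shards[shard_idx].remove(point)
    models[shard_idx] = fit_shard(shards[shard_idx])
    return models

shards = [[1.0, 2.0], [3.0, 5.0], [4.0, 6.0]]
models = fit_all(shards)
models = unlearn(shards, models, 1, 5.0)  # drop one point, refit shard 1
print(predict(models))
```

The cost of honoring a deletion request is bounded by the size of one shard rather than the full dataset, which is the intuition behind why such schemes can avoid "potentially intense retraining sessions."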

The Importance of Defining Fairness for Decision-Making AI Models

Defining fairness is a problematic task. The definition depends heavily on context and culture, and when it comes to algorithms, every problem is unique and will therefore be solved using unique datasets. Algorithmic fairness can stem from statistical and mathematical definitions, and even from legal definitions of the problem at hand. Furthermore, if we build models based on different definitions of fairness for the same purpose, they will produce entirely different outcomes. 

The measure of fairness also changes with each use case. AI for credit scoring is entirely different from customer segmentation for marketing efforts, for example. In short, it’s tough to land on a catch-all definition, but for the purpose of this article, I thought I’d make the following attempt: An algorithm has fairness if it does not produce unfair outcomes for, or representations of, individuals or groups.
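One common statistical operationalization of the definition above is demographic parity: the rate of positive outcomes should be similar across groups. The sketch below computes the parity gap for two groups; the decision data is entirely made up for illustration, and this is only one of many competing fairness metrics.

```python
# Demographic parity: compare positive-outcome rates across groups.
# A gap near 0 suggests parity on this particular metric.

def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_a, outcomes_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(outcomes_a) - positive_rate(outcomes_b))

# 1 = approved, 0 = rejected (hypothetical credit decisions)
group_a = [1, 1, 0, 1, 0]  # 60% approved
group_b = [1, 0, 0, 1, 0]  # 40% approved

gap = demographic_parity_gap(group_a, group_b)
print(round(gap, 2))  # prints 0.2
```

Note that two models can have identical accuracy and very different parity gaps, which is exactly why the choice of fairness definition changes the outcome.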

Why Fairer AI Is Essential For Long-Term Survival

Introduction

In my most recent post, I covered some areas that I hope to see evolve in the next year and beyond. How we can do more with data across industries is, of course, an important consideration for data scientists, businesses, and society as a whole, as better models lead to improved products and services. 

When machine learning models for cancer diagnoses show promise, we naturally rally around this positive step and rejoice in the vision of a brighter future, because it’s a victory that touches us all in some way. But there are many other ways AI can and must be used for good in the world, and in my next few posts, I want to use a financial services example that affects all of us to show how that can be achieved.