Chapter 10: Browser Wars

In June of 1995, representatives from Microsoft arrived at the Netscape offices. The stated goal was to find ways to work together—Netscape as the single dominant force in the browser market and Microsoft as a tech giant just beginning to consider the implications of the Internet. Both groups, however, were suspicious of ulterior motives.

Marc Andreessen was there. He was already something of a web celebrity. Newly appointed Netscape CEO James Barksdale also came. On the Microsoft side was a contingent of product managers and engineers hoping to push Microsoft into the Internet market.

The meeting began amicably enough, as the delegation from Microsoft shared what they were working on in the latest version of their operating system, Windows 95. Then, things began to sour.

According to accounts from Netscape, “Microsoft offered to make an investment in Netscape and give Netscape’s software developers crucial technical information about the Windows operating system if Netscape would agree not to make a browser for [the] Windows 95 operating system.” If that was to be believed, Microsoft would have tiptoed over the line of what is legal. The company would be threatening to use its monopoly to squash competition.

Andreessen, no stranger to dramatic flair, would later dress the meeting up with a nod to The Godfather in his deposition to the Department of Justice: “I expected to find a bloody computer monitor in my bed the next day.”

Microsoft claimed the meeting was a “setup,” initiated by Netscape to bait them into a compromising situation they could turn to their advantage later.

There are a few different places to mark the beginning of the browser wars. The release of Internet Explorer 1, for instance (late summer, 1995). Or the day Andreessen called out Microsoft as nothing but a “poorly debugged set of device drivers” (early 1995). But June 21, 1995—when Microsoft and Netscape came to a meeting as conceivable friends and left as bitter foes—may be the most definitive.


Andreessen called it “free, but not free.”

Here’s how it worked. When the Netscape browser was released, it came with a fee of $39 per copy. That was the official price. But fully functional Netscape beta versions were free to download from their website. And universities and non-profits could easily get zero-cost licenses.

For the upstarts of the web revolution and open source tradition, Netscape was free enough. Buttoned-up corporations buying in bulk with specific contractual needs could license the software for a reasonable fee. Free, but not free. “It looks free optically, but it is not,” a Netscape employee would later describe it. “Corporations have to pay for it. Maintenance has to be paid.”

“It’s basically a Microsoft lesson, right?” was how Andreessen framed it. “If you get ubiquity, you have a lot of options, a lot of ways to benefit from that.” If people didn’t have a way to get quick and easy access to Netscape, it would never spread. It was a lesson Andreessen had learned behind his computer terminal at the NCSA research lab at the University of Illinois. Just a year prior, he and his friends built the wildly successful, cross-platform Mosaic browser.

Andreessen worked on Mosaic for several years in the early ’90s. But he began to feel cramped by increasing demands from higher-ups at NCSA hoping to capitalize on the browser’s success. At the end of 1993, Andreessen headed west to stake his claim in Silicon Valley. That’s where he met James Clark.

Netscape Communications Corporation co-founders Jim Clark, left, and Marc Andreessen (AP Photo/HO)

Clark had just cut ties with Silicon Graphics, the company he had founded, and he was already a legend in the valley. When he saw the web for the first time, someone suggested he meet with Andreessen. So he did. The two hit it off immediately.

Clark—with his newly retired time and fortune—brought an inner circle of tech visionaries together for regular meetings. “For the invitees, it seemed like a wonderful opportunity to talk about ideas, technologies, strategies,” one account would later put it. “For Clark, it was the first step toward building a team of talented like-minded people who would populate his new company.” Andreessen, still very much the emphatic and relentless advocate of the web, increasingly moved to the center of this circle.

The duo considered several ideas. None stuck. But they kept coming back to one. Building the world’s first commercial browser.

And so, on a snowy day in mid-April 1994, Andreessen and Clark took a flight out to Illinois. They were there with a single goal: hire the members of the original Mosaic team still working at the NCSA lab for their new company. They went straight to the lobby of a hotel just outside the university. One by one, Clark met with five of the people who had helped create Mosaic (plus Lou Montulli, creator of Lynx and a student at the University of Kansas) and offered them a job.

Right in a hotel room, Clark printed out contracts with lucrative salaries and stock options. Then he told them the mission of his new company. “Its mandate—Beat Mosaic!—was clear,” one employee recalled. By the time Andreessen and Clark flew back to California the next day, they had hired the six new employees of the soon-to-be-named Netscape.

Within six months they would release their first browser—Netscape Navigator. Six months after that, the easy-to-use, easy-to-install browser would overrun the market and bring millions of users online for the first time.

Clark, speaking to the chaotic energy of the browser team and the speed at which they built software that changed the world, would later say Netscape gave “anarchy credibility.” Writer John Cassidy puts that into context. “Anarchy in the post-Netscape sense meant that a group of college kids could meet up with a rich eccentric, raise some money from a venture capitalist, and build a billion-dollar company in eighteen months,” adding, “Anarchy was capitalism as personal liberation.”


Inside of Microsoft were a few restless souls.

The Internet, and the web, was passing the tech giant by. Windows was the most popular operating system in the world—a virtual monopoly. But that didn’t mean they weren’t vulnerable.

As early as 1993, three employees at Microsoft—Steven Sinofsky, J. Allard, and Benjamin Slivka—began to sound the alarms. Their uphill battle to make Microsoft realize the promise of the Internet is documented in the “Inside Microsoft” profile penned by Kathy Rebello, which was published by Bloomberg in 1996. “I dragged people into my office kicking and screaming,” Sinofsky told Rebello, “I got people excited about this stuff.”

Some employees believed Microsoft was distracted by a need to control the network. Investment poured into a proprietary network in the mold of CompuServe or Prodigy, called the Microsoft Network (or MSN). Microsoft wanted to control the entire networked experience. But MSN would ultimately be a huge failure.

Slivka and Allard believed Microsoft was better positioned to build with the Internet rather than compete against it. “Microsoft needs to ensure that we ride the success of the Web, instead of getting drowned by it,” wrote Slivka in some of his internal communication.

Allard went a step further, drafting an internal memo named “Windows: The Next Killer Application for the Internet.” Allard’s approach, laid out in the document, would soon be the cornerstone of Microsoft’s Internet strategy. It consisted of three parts. First, embrace the open standards of the web. Second, extend its technology to the Microsoft ecosystem. Finally (and often forgotten), innovate and improve web technologies.

After a failed bid to acquire BookLink’s InternetWorks browser in 1994—AOL swooped in and outbid them—Microsoft finally got serious about the web. Its meeting with Netscape hadn’t yielded any results either. Instead, Microsoft negotiated a deal with NCSA’s commercial partner Spyglass to license Mosaic for the first Microsoft browser.

In August of 1995, Microsoft released Internet Explorer version 1.0. It wasn’t very original, based on code that Spyglass had licensed to dozens of other partners. Shipped as part of an Internet Jumpstart add-on, the browser was bare-bones, clunkier and harder to use than what Netscape offered.

Source: Web Design Museum

On December 7th, Bill Gates hosted a large press conference on the anniversary of Pearl Harbor. He opened with news about the Microsoft Network, the star of the show. But he also demoed Internet Explorer, borrowing language directly from Allard’s proposal. “So the Internet, the competition will be kind of, once again, embrace and extend,” Gates announced, “And we will embrace all the popular Internet protocols… We will do some extensions to those things.”

Microsoft had entered the market.


Like many of her peers, Rosanne Siino learned the world of personal computing on her own. After studying English in college—with an eye towards journalism—Siino found herself at a PR firm with clients like Dell and Seagate. Naturally curious and resourceful, she read trade magazines and talked to engineers to learn what she could about personal computing in the information age.

She developed a special talent for taking the language and stories of engineers and translating them into bold visions of the future. Friendly, and always engaging, Siino built up a Rolodex of trade publication and general media contacts along the way.

After landing a job at Silicon Graphics, Siino worked closely with James Clark (he would later remark she was “one of the best PR managers at SGI”). She identified with Clark’s restlessness when he made plans to leave the company—an exit she helped coordinate—and decided if the opportunity came to join his new venture, she’d jump ship.

A few months later, she did. Siino was employee number 19 at Netscape, its first public relations hire.

When Siino arrived at the brand new Netscape offices in Mountain View, the first thing she did was sit down and talk to each one of the engineers. She wanted to hear—straight from the source—what the vision of Netscape was. She heard a few things. Netscape was building a “killer application,” one that would make other browsers irrelevant. They had code that was better, faster, and easier to use than anything out there.

Siino knew she couldn’t sell good code. But a young, hard-working group of fresh-out-of-college transplants from rural America making a run at entrenched Silicon Valley: that was something she could sell. “We had this twenty-two-year-old kid who was pretty damn interesting and I thought, ‘There’s a story right there,’” she later said in an interview for the book Architects of the Web. “And we had this crew of kids who had come out from Illinois and I thought, ‘There’s a story there too.’”

Inside of Netscape, some executives and members of the board had been talking about an IPO. With Microsoft hot on their heels, and competitor Spyglass launching a successful IPO of their own, timing was critical. “Before very long, Microsoft was sure to attack the Web browser market in a more serious manner,” writer John Cassidy explains, “If Netscape was going to issue stock, it made sense to do so while the competition was sparse.” Not to mention, a big, flashy IPO was just what the company needed to make headlines all around the country.

In the months leading up to the IPO, Siino crafted a calculated image of Andreessen for the press. She positioned him as a leader of the software generation, an answer to the now-stodgy, silicon-driven hardware generation of the ’60s and ’70s. In interviews and profiles, Siino made sure Andreessen came off as a whip-smart visionary ready to tear down the old ways of doing things; the “new Bill Gates.”

That required a fair bit of cooperation from Andreessen. “My other real challenge was to build up Marc as a persona,” she would later say. Sometimes, Andreessen would complain about the interviews, “but I’d be like, ‘Look, we really need to do this.’ And he’s savvy in that way. He caught on.” Soon, it was almost natural, and as Andreessen traveled around with CEO James Barksdale to talk to potential investors ahead of their IPO, Netscape hype continued to inflate.

August 9, 1995, was the day of the Netscape IPO. Employees buzzed around the Mountain View offices, too nervous to watch the financial news beaming from their screens or the TV. “It was like saying don’t notice the pink elephant dancing in your living room,” Siino said later. They shouldn’t have worried. In its first day of trading, the Netscape stock price rose 108%. It was the best opening day for a stock on Wall Street. Some of the founding employees went to bed that night millionaires.

Not long after, Netscape released version 2 of their browser. It was their most ambitious release to date. Bundled in the software were tools for checking email, talking with friends, and writing documents. It was sleek and fast. The Netscape homepage that booted up each time the software started sported all sorts of nifty and well-known web adventures.

Not to mention JavaScript. Netscape 2 was the first version to ship with Java applets, small applications that ran directly in the browser. With Java, Netscape aimed to compete directly with Microsoft and their operating system.

To accompany the release, Netscape recruited young programmer Brendan Eich to work on a scripting language that riffed on Java. The result was JavaScript. Eich created the first version in 10 days as a way for developers to make pages more interactive and dynamic. It was primitive, but easy to grasp, and powerful. Since then, it has become one of the most popular programming languages in the world.
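To give a flavor of what that meant in practice, here is a minimal sketch of the kind of inline script early JavaScript made possible. It is illustrative only, not code from Netscape; the form and field names are hypothetical, but checking a form in the browser before it was ever sent to a server was one of the language’s first everyday jobs.

<form name="signup" onsubmit="return checkEmail()">
  <input type="text" name="email">
  <input type="submit" value="Join">
</form>

<script language="JavaScript">
  // Validate the address right in the browser, before the form is submitted anywhere.
  function checkEmail() {
    var value = document.signup.email.value;
    if (value.indexOf("@") == -1) {
      alert("Please enter a valid email address.");
      return false; // cancel the submit
    }
    return true;
  }
</script>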

Microsoft wasn’t far behind. But Netscape felt confident. They had pulled off the most ambitious product the web had ever seen. “In a fight between a bear and an alligator, what determines the victor is the terrain,” Andreessen said in an interview from the early days of Netscape. “What Microsoft just did was move into our terrain.”


There’s an old adage at Microsoft that it never gets something right until version 3.0. It was true even of its flagship product, Windows, and has notoriously been true of its most famous applications.

The first version of Internet Explorer was a rushed port of the Mosaic code that acted as little more than a public statement that Microsoft was going into the browser business. The second version, released just after Netscape’s IPO in late 1995, saw rapid iteration but still lagged far behind. With Internet Explorer 3, Microsoft began to get the browser right.

Microsoft’s big, showy press conference hyped Internet Explorer as a true market challenger. Behind the scenes, it operated more like a skunkworks experiment. Six people were on the original product team. In a company of tens of thousands. “A bit like the original Mac team, the IE team felt like the vanguard of Microsoft,” one-time Internet Explorer lead Brad Silverberg would later say, “the vanguard of the industry, fighting for its life.”

That changed quickly. Once Microsoft recognized the potential of the web, they shifted their weight to it. In Speeding the Net, a comprehensive account of the rise of Netscape and its fall at the hands of Microsoft, authors Josh Quittner and Michelle Slatalla describe the Microsoft strategy. “In a way, the quality of it didn’t really matter. If the first generation flopped, Gates could assign a team of his best and brightest programmers to write an improved model. If that one failed too, he could hire even better programmers and try again. And again. And again. He had nearly unlimited resources.”

By version 3, the Internet Explorer team had a hundred people on it (including Chris Wilson of the original NCSA Mosaic team). That number would reach the thousands in a few short years. The software rapidly closed the gap. Internet Explorer matched the features that had given Netscape an edge—and introduced its own HTML extensions, dynamic animation tools for developers, and rudimentary support for CSS.

In the summer of 1996, Walt Mossberg talked up Microsoft’s browser. Only months prior he had labeled Netscape Navigator the “clear victor.” But he was beginning to change his mind. “I give the edge, however, to Internet Explorer 3.0,” he wrote of Microsoft’s version 3. “It’s a better browser than Navigator 3.0 because it is easier to use and has a cleaner, more flexible user interface.”

Microsoft Internet Explorer 3.0.01152
Netscape Navigator 3.04

Still, most Microsoft executives knew that competing on features would never be enough. In December of 1996, senior VP James Allchin emailed his boss, Paul Maritz. He laid out the current strategy, an endless chase after Netscape’s feature set. “I don’t understand how IE is going to win,” Allchin conceded, “My conclusion is that we must leverage Windows more.” In the same email, he added, “We should think first about an integrated solution — that is our strength.” Microsoft was not about to simply lie down and allow themselves to be beaten. They focused on two things: integration with Windows and wider distribution.

When it was released, Internet Explorer 4 was more tightly integrated with the operating system than any previous version; an almost inseparable part of the Windows package. It could be used to browse files and folders. Its “push” technology let you stream the web, even when you weren’t actively using the software. It used internal APIs that were unavailable to outside developers to make the browser faster, smoother, and readily available.

And then there was distribution. Days after Netscape and AOL shook on a deal to include their browser on the AOL platform, AOL abruptly changed their mind and went with Internet Explorer instead. It would later be revealed that Microsoft had made them, as one writer put it (extending The Godfather metaphor once more), an “offer they couldn’t refuse.” Microsoft had dropped their prices down to the floor and—more importantly—promised AOL precious real estate pre-loaded on the desktop of every copy of the next Windows release.

Microsoft fired their second salvo with Compaq. Up to that point, all Compaq computers had shipped with Netscape pre-installed on Windows. When Microsoft threatened to suspend Compaq’s license to use Windows at all (a threat revealed later in court documents), that changed to Internet Explorer too.

By the time Windows 98 was released, Internet Explorer 4 came already installed, free for every user, and impossible to remove.


“Mozilla!” interjected Jamie Zawinski. He was in a meeting at the time, which rang for just a moment in deafening silence. Heads turned. Then the meeting kept going.

This was early days at Netscape. A few employees from engineering and marketing huddled together to try to come up with a name for the thing. One employee suggested they were going to crush Mosaic, like a bug. Zawinski—with the dry, biting humor he was well known for—suggested Mozilla, “as in Mosaic meets Godzilla.”

Eventually, marketer Greg Sands settled on Netscape. But around the office, the browser was, from then on, nicknamed Mozilla. Early marketing materials on the web even featured a Mozilla-inspired mascot, a green lizard with a know-it-all smirk, before it was shelved for something more professional.

Credit: Dave Titus


It would be years before the name would come back in any public way, and Zawinski would have a hand in that too.

Zawinski had been with Netscape since almost the beginning. He was employee number 20, brought in right after Rosanne Siino to take over the work Andreessen had done at NCSA, building the flagship version of Netscape for X Windows. By the time he joined, he already had something of a reputation for solving complex technical challenges.

Jamie Zawinski

Zawinski’s earliest memories of programming date back to eighth grade. In high school, he was a terrible student. But he still managed to get a job after school as a programmer, working on the one thing that managed to keep him interested: code. After that, he started work at the startup Lucid, Inc., which boasted a strong pedigree of programming legends at its helm. Zawinski worked on the Common Lisp programming language and the popular Emacs text editor; technologies revered in the still-small programming community. By virtue of his work on those projects, Zawinski had instant credibility among the tech elite.

At Netscape, the engineering team was central to the way things worked. It was why Siino had chosen to meet with members of that team as soon as she began, and why she crafted the story of Netscape around the way they operated. The result was a high-pressure, high-intensity atmosphere so indispensable to the company that it would become part of the company’s mythology. They moved so quickly that many began to call such a rapid pace of development “Netscape Time.”

“It was really a great environment. I really enjoyed it,” Zawinski would later recall. “Because everyone was so sure they were right, we fought constantly but it allowed us to communicate fast.” But tempers did flare (one article details a time when he threw a chair against the wall and left abruptly for two weeks after his computer crashed), and many engineers would later reflect on the toxic workplace. Zawinski once put it simply: “It wasn’t healthy.”

Still, engineers had a lot of sway at the organization. Many of them, Zawinski included, were advocates of free software. “I guess you can say I’ve been doing free software since I’ve been doing software,” he would later say in an interview. For Zawinski, software was meant to be free. From his earliest days on the Netscape project, he advocated for a freer version of the browser. He and others on the engineering team were at least partly responsible for the creative licensing that went into the company’s “free, but not free” business model.

In 1997, technical manager Frank Hecker breathed new life into the free software paradigm. He wrote a 30-page whitepaper proposing what several engineers had wanted for years—to release the entire source of the browser for free. “The key point I tried to make in the document,” Hecker asserted, “was that in order to compete effectively Netscape needed more people and companies working with Netscape and invested in Netscape’s success.”

With the help of CTO Eric Hahn, Hecker and Zawinski made their case all the way to the top. By the time they got in the room with James Barksdale, most of the company had already come around to the idea. Much to everyone’s surprise, Barksdale agreed.

On January 23, 1998, Netscape made two announcements. The first everyone expected. Netscape had been struggling to compete with Microsoft for nearly a year. The most recent release of Internet Explorer version 4, bundled directly into the Windows operating system for free, was capturing ever larger portions of their market share. So Netscape announced it would be giving its browser away for free too.

The next announcement came as a shock. Netscape was going open source. The browser’s entire source code—millions of lines of it—would be released to the public and opened to contributions from anybody in the world. Led by Netscape veterans like Michael Toy, Tara Hernandez, Scott Collins, and Jamie Zawinski, the team would have three months to clean up the code base and get it ready for public distribution. The effort had a name, too: Mozilla.

Firefox 1.0 (Credit: Web Design Museum)

On the surface, Netscape looked calm and poised to take on Microsoft with the force of the open source community at their wings. Inside the company, things looked much different. The three months that followed were filled with frenetic energy, close calls, and unparalleled pace. Recapturing the spirit of the earliest days of innovation at Netscape, engineers worked frantically to patch bugs and get the code ready to be released to the world. In the end, they did it, but only by the skin of their teeth.

In the process, the project spun out into an independent organization under the domain Mozilla.org. It was staffed entirely by Netscape engineers, but Mozilla was not technically a part of Netscape. When Mozilla held a launch party in April of 1998, just months after their public announcement, it didn’t just have Netscape members in attendance.

Zawinski had organized the party, and he insisted that a now growing community of people outside the company who had contributed to the project be a part of it. “We’re giving away the code. We’re sharing responsibility for development of our flagship product with the whole net, so we should invite them to the party as well,” he said, adding, “It’s a new world.”


On the day of his testimony in November of 1998, Steve McGeady sat, as one writer described, “motionless in the witness box.” He had been waiting for this moment for a long time; the moment when he could finally reveal, in his view, the nefarious and monopolistic strain that coursed through Microsoft.

The Department of Justice had several key witnesses in their antitrust case against Microsoft, but McGeady was a linchpin. As a Vice President at Intel, McGeady had regular dealings with Microsoft, and his company stood outside of the Netscape and Microsoft conflict. There was an extra layer of tension to his particular testimony, though. “The drama was heightened immeasurably by one stark reality,” noted one journalist’s account of the trial, “nobody—literally, nobody—knew what McGeady was going to say.”

When he got his chance to speak, McGeady testified that high-ranking Microsoft executives had told him that their goal was to “cut off Netscape’s air supply.” Using its monopoly position in the operating system market, Microsoft pressured computer manufacturers—many of whom Intel had regular dealings with—to ship their computers with Internet Explorer or face having their Windows licenses revoked entirely.

Drawing on the language Bill Gates used in his announcement of Internet Explorer, McGeady claimed that one executive had laid out their strategy: “embrace, extend and extinguish.” According to his allegations, Microsoft never intended to enter into a competition with Netscape. They were ready to use every aggressive tactic and walk the line of legality to crush them. It was a major turning point for the case and a massive win for the DOJ.

The case against Microsoft, however, had begun years earlier, when Netscape retained a team from the antitrust law firm Wilson Sonsini Goodrich & Rosati in the summer of 1995. The legal team included outspoken anti-Microsoft crusader Gary Reback, as well as Susan Creighton. Reback would be the most public member of the firm in the coming half-decade, but it was Creighton’s contributions that would ultimately turn the attention of the DOJ. Creighton began her career as a clerk for Supreme Court Justice Sandra Day O’Connor. She quickly developed a reputation for precision and thoroughness. Her deliberate and methodical approach made her a perfect fit for a full and complete breakdown of Microsoft’s anti-competitive strategy.

Susan Creighton (Credit: Wilson Sonsini Goodrich & Rosati)

Creighton’s work with Netscape led her to write a 222-page document detailing the anti-competitive practices of Microsoft. She laid out her case plainly and simply: “It is about a monopolist (Microsoft) that has maintained its monopoly (desktop operating systems) for more than ten years. That monopoly is threatened by the introduction of a new technology (Web software)…”

The document was originally planned as a book, but Netscape feared that if the public knew just how much danger they were in from Microsoft, their stock price would plummet. Instead, Creighton and Netscape handed it off to the Department of Justice.

Inside the DOJ, it would trigger a renewed interest in ongoing antitrust investigations of Microsoft. Years of subpoenas, information gathering, and lengthy depositions would follow. After almost three years, in May of 1998, the Department of Justice and 20 state attorneys general filed an antitrust suit against Microsoft, a company which had only just then crossed over a 50% share of the browser market.

“No firm should be permitted to use its monopoly power to develop a chokehold on the browser software needed to access the Internet,” announced Janet Reno—the prosecuting attorney general under President Clinton—when charges were brought against Microsoft.

At the center of the trial was not necessarily the stranglehold Microsoft had on the software of personal computers—not technically an illegal practice. It was the way they used their monopoly to directly counter competition in other markets. For instance, the practice of threatening to revoke the licenses of manufacturers that packaged computers with Netscape. Netscape’s account of the June 1995 meeting factored in as well (when Andreessen was asked why he had taken such detailed notes on the meeting, he replied, “I thought that it might be a topic of discussion at some point with the US government on antitrust issues.”)

Throughout the trial, both publicly and privately, Microsoft reacted to scrutiny poorly. They insisted that they were right; that they were doing what was best for the customers. In interviews and depositions, Bill Gates would often come off as curt and dismissive, unable or unwilling to yield any cession of power. The company insisted that the browser and operating system were inseparable, that one could not live without the other—a fact handily refuted by the judge when he noted that he had managed to uninstall Internet Explorer from Windows in “less than 90 seconds.” The trial became a national sensation as tech enthusiasts and news junkies waited with bated breath for each new revelation.

Microsoft President Bill Gates, left, testifies on Capitol Hill, Tuesday, March 3, 1998. (Credit: Ken Cedeno/AP file photo)

In November of 1999, the presiding judge issued his ruling. Microsoft had, in fact, used its monopoly power and violated antitrust laws. That was followed in the summer of 2000 by a proposed remedy: Microsoft was to be broken up into two separate companies, one to handle its operating software, and the other its applications. “When Microsoft has to compete by innovating rather than reaching for its crutch of the monopoly, it will innovate more; it will have to innovate more. And the others will be free to innovate,” Iowa State Attorney General Tom Miller said after the judge’s ruling was announced.

That never happened. An appeal in 2002 resulted in a reversal of the ruling and the Department of Justice agreed to a lighter consent decree. By then, Internet Explorer’s market share stood at around 90%. The browser wars were, effectively, over.


“Are you looking for an alternative to Netscape and Microsoft Explorer? Do you like the idea of having an MDI user interface and being able to browse in multiple windows?… Is your browser slow? Try Opera.”

That short message announced Opera to the world for the first time in April of 1995, posted by the browser’s creators to a Usenet forum about Windows. The tone of the message—technically meticulous, a little pointed, yet genuinely idealistic—reflected the philosophy of Opera’s creators, Jon Stephenson von Tetzchner and Geir Ivarsøy. Opera, they claimed, was well-aligned with the ideology of the web.

Opera began as a project run out of the Norwegian telecommunications firm Telenor. Once it became stable, von Tetzchner and Ivarsøy rented space at Telenor to spin it out into an independent company. Not long after, they posted that announcement and released the first version of the Opera web browser.

The team at Opera was small, but focused and effective, loyal to the open web. “Browsers are in our blood,” von Tetzchner would later say. Time and time again, the Opera team would prove that. They were staffed by the web’s true believers, and have often prided themselves on leading the development of web standards and an accessible web.

In the mid-to-late ’90s, Geir Ivarsøy was the first person to implement the CSS standard in any browser, in Opera 3.5. That would prove more than enough to convince the creator of CSS, Håkon Wium Lie, to join the company as CTO. Ian Hickson worked at Opera during the time he developed the CSS Acid Test at the W3C.

The original CSS Acid Test (Credit: Eric Meyer)

The company began developing a version of their browser for low-powered mobile devices in developing nations as early as 1998. They have often tried to push the entire web community towards web standards, leading when possible by example.

Years after the antitrust lawsuit against Microsoft, and its reversal on appeal, Opera would find themselves embroiled in a conflict on a different front of the browser wars.

In 2007, Opera filed a complaint with the European Commission. Much like the case made by Creighton and Netscape, Opera alleged that Microsoft was abusing its monopoly position by bundling new versions of Internet Explorer with Windows 7. The EU had begun looking into allegations against Microsoft almost as soon as the Department of Justice had, but the Opera complaint added a substantial and recent area of inquiry. Opera claimed that Microsoft was limiting user choice by obscuring additional browser options. “You could add more browsers, to give consumers a real choice between browsers, you put them in front of their eyeballs,” Lie said at the time of the complaint.

In the summary of their complaint, Opera painted a picture of a free and open web. Opera, they argued, were advocates of the web as it was intended—accessible, universal, and egalitarian. Once again citing the language of “embrace, extend, and extinguish,” the company also called out Microsoft for trying to take control of the web standards process. “The complaint calls on Microsoft to adhere to its own public pronouncements to support these standards, instead of stifling them with its notorious ‘Embrace, Extend and Extinguish’ strategy,” it read.

The browser “ballot box” (Credit: Ars Technica)

In 2010, the European Commission issued a ruling, forcing Microsoft to show a so-called “ballot box” to European users of Windows—a website users could see the first time they accessed the Internet that listed twelve alternative browsers to download, including Opera and Mozilla. Microsoft included this website in their European Windows installs for five years, until their obligation lapsed.


Netscape Navigator 5 never shipped. It echoes, unreleased, in the halls of software’s most public and recognized vaporware.

After Netscape open-sourced their browser as part of the Mozilla project, the focus of the company split. Between being acquired by AOL and continuing pressure from Microsoft, Netscape was on its last legs. The public trial of Microsoft brought some respite, but too little, too late. “It’s one of the great ironies here,” Netscape lawyer Gary Reback would later say, “after years of effort to get the government to do something, by [1998] Netscape’s body is already in the morgue.” Meanwhile, management inside of Netscape couldn’t decide how best to integrate with the Mozilla team. Rather than work alongside the open-source project, they continued to maintain a version of Netscape separate and apart from the public project.

In October of 1998, Brendan Eich—who was part of the core Mozilla team—published a post to the Mozilla blog. “It’s time to stop banging our heads on the old layout and FE codebase,” he wrote. “We’ve pulled more useful miles out of those vehicles than anyone rightly expected. We now have a great new layout engine that can view hundreds of top websites.”

Many Mozilla contributors agreed with the sentiment, but the rewrite Eich proposed would spell the project’s initial downfall. While Mozilla tinkered away on a new rendering engine for the browser—which would soon be known as Gecko—Netscape scrapped its planned version 5.

Progress ground to a halt. Zawinski, one of the Mozilla team members opposed to the rewrite, would later describe his frustration when he resigned from Netscape in 1999. “It constituted an almost-total rewrite of the browser, throwing us back six to 10 months. Now we had to rewrite the entire user interface from scratch before anyone could even browse the Web, or add a bookmark.” Scott Collins, one of the original Netscape programmers, would put it less diplomatically: “You can’t put 50 pounds of crap in a ten pound bag, it took two years. And we didn’t get out a 5.0, and that cost us everything, it was the biggest mistake ever.”

The result was a world-class browser with great standards support and a fast-running browser engine. But it wasn’t ready until April of 2000, when Netscape 6 was finally released. By then, Microsoft had eclipsed Netscape, owning 80% of the browser market. It would never be enough to take back a significant portion of that browser share.

“I really think the browser wars are over,” said one IT exec after the release of Netscape 6. He was right. Netscape would sputter out for years. As for Mozilla, that would soon be reborn as something else entirely.



Chapter 9: Community

In April of 2009, Yahoo! shut down GeoCities. Practically overnight, the once beloved service had its signup page replaced with a vague message announcing its closure.

We have decided to discontinue the process of allowing new customers to sign up for GeoCities accounts as we focus on helping our customers explore and build new relationships online in other ways. We will be closing GeoCities later this year.

Existing GeoCities accounts have not changed. You can continue to enjoy your web site and GeoCities services until later this year. You don’t need to change a thing right now — we just wanted to let you know about the closure as soon as possible. We’ll provide more details about closing GeoCities and how to save your site data this summer, and we will update the help center with more details at that time.

In the coming months, the company would offer little more detail than that. Within a year, user homepages built with GeoCities would blink out of existence, one by one, until they were all gone.

Reactions to the news ranged from outrage to contemptuous good riddance. In general, however, the web lamented a great loss. Former GeoCities users recalled the sites that they had built using the service, often hidden from public view, and often while they were very young.

For programmer and archivist Jason Scott, nostalgic remembrances did not go far enough. He had only recently created the Archive Team, a rogue group of Internet archivists willing to lend their compute cycles to the rescue of soon-to-be-departed websites. The Archive Team monitors sites on the web marked for closure. If they find one, they run scripts on their computers to download as much of the site as they can before it disappears.

Scott did not think the question of whether or not GeoCities deserved to exist was relevant. “Please recall, if you will, that for hundreds of thousands of people, this was their first website,” he posted to his website not long after Yahoo!’s announcement. “[Y]ou could walk up to any internet-connected user, hand them the URL, and know they would be able to see your stuff. In full color.” GeoCities wasn’t simply a service. It wasn’t just some website. It was a burst of creative energy that surged from the web.

In the weeks and months that followed, the Archive Team set to work downloading as many GeoCities sites as they could. They would end up with millions in their archive before Yahoo! pulled the plug.

Chris Wilson recalled the promise of an early web in a talk looking back on his storied career with Mosaic, then Internet Explorer, and later Google Chrome. The first web browser, developed by Sir Tim Berners-Lee, included the ability for users to create their own websites. As Wilson remembers it, that was the de facto assumption about the web—that it would be a participatory medium.

“Everyone can be an author. Everyone would generate content,” Wilson said, “We had the idea that web server software should be free and everyone would run a server on their machine.” His work on Mosaic included features well ahead of their time, like built-in annotations so that users could collaborate and share thoughts on web documents together. They built server software in the hopes that groups of friends would cluster around common servers. By the time Netscape skyrocketed to popularity, however, all of those features had faded away.

GeoCities represented the last remaining bastion of this original promise of the web. Closing the service down, abruptly and without cause, was a betrayal of that promise. For some, it was the writing on the wall: the web of tomorrow was to look nothing like the web of yesterday.


In a story he recalls frequently, David Bohnett learned about the web on an airplane. Tens of thousands of feet up, untethered from any Internet network, he first saw mention of the web in a magazine. Soon thereafter, he fell in love.

Bohnett is a naturally empathetic individual. The long arc of his career so far has centered on bringing people together, both as a technologist and as a committed activist. As a graduate student, he worked as a counselor answering calls on a crisis hotline and became involved in the gay rights movement at his school. In more recent years, Bohnett has devoted his life to philanthropy.

Finding connection through compassion has been a driving force for Bohnett for a long time. At a young age, he recognized the potential of technology to help him reach others. “I was a ham radio operator in high school. It was exciting to collect postcards from people you talked to around the world,” he would later say in an interview. “[T]hat is a lot of what the Web is about.”

Some of the earliest websites brought together radical subcultures and common interests. People felt around in the dark of cyberspace until they found something they liked.

Riding a wave of riot grrrl ephemera in the early 1990s, ChickClick was an early example. Featuring a mix of articles and message boards, it was a place where women and young girls gathered to swap stories from their own experience.

Much of the site centered on its strident creators, sisters Heather and Heidi Swanson. Though they each had their own areas of responsibility—Heidi provided the text and the editorial, Heather acted as the community liaison—both were integral parts of the community they created. ChickClick would not exist without the Swanson sisters. They anchored the site to their own personalities and let it expand through like-minded individuals.

Eventually, ChickClick grew into a network of linked sites, each focused on a narrower demographic; an interconnected universe of women on the web. The cost of expanding was virtually zero, just a few more bytes zipping around the Internet. ChickClick’s greatest innovation came when it offered users their own homepages. Using a rudimentary website builder, visitors could create their own space on the web, for free and hosted by ChickClick. Readers were suddenly transformed into direct participants in the universe they had grown to love.

Bohnett would arrive at a similar idea not long after. After a brief detour running a more conventional web services agency called Beverly Hills Internet, Bohnett and his business partner John Rezner tried something new. In 1994, Bohnett sent around an email to some friends inviting them to create a free homepage (up to 15MB) on their experimental service. The project was called GeoCities.

What made GeoCities instantly iconic was that it reached for a familiar metaphor in its interface. When users created an account for the first time they had to pick an actual physical location on a virtual map—the digital “address” of their website. “This is the next wave of the net—not just information but habitation,” Bohnett would say in a press release announcing the project. Carving out a real space in cyberspace would become a trademark of the GeoCities experience. For many new users of the web, it made the confusing world of the web feel lived in and real.

The GeoCities map was broken up into a handful of neighborhoods users could join. Each neighborhood had a theme, though there wasn’t much rhyme or reason to what they were called. Some were based on real-world locations, like Beverly Hills for fashion aficionados or Broadway for theater nerds. Others simply played to a theme, like Area51 for the sci-fi crowd or Heartland for parents and families. Themes weren’t enforced, and most were later dropped in everything but name.

Credit: One Terabyte of Kilobyte Age

Neighborhoods were limited to 10,000 people. When that number was reached, the neighborhood expanded into suburbs. Everywhere you went on GeoCities there was a tether to real, physical spaces.

Like any real-world community, no two neighborhoods were the same. And while some people weeded their digital gardens and tended to their homepages, others left their spaces abandoned and bare, gone almost as soon as they arrived. But a core group of people often gathered in their neighborhoods around common interests and established a set of ground rules.

Historian Ian Milligan has done extensive research on the mechanics and history of GeoCities. In his digital excavation, he discovered a rich network of GeoCities users who worked hard to keep their neighborhoods orderly and constructive. Some neighborhoods assigned users as community liaisons, something akin to a dorm room RA, or neighborhood watch. Neighbors were asked to (voluntarily) follow a set of rules. Select members acted as resources, reaching out to others to teach them how to build better homepages. “These methods, grounded in the rhetoric of both place and community,” Milligan argues, “helped make the web accessible to tens of millions of users.”

For a large majority of users, however, GeoCities was simply a place to experiment, not a formal community. GeoCities would eventually become one of the web’s most popular destinations. As more amateurs poured in, it would become known for a certain garish aesthetic, pixelated GIFs of construction workers, or bright text on bright backgrounds. People used their homepages to host their photo albums, or make celebrity fan sites, or to write about what they had for lunch. The content of GeoCities was as varied as the entirety of human experience. And it became the grounding for a lot of what came next.

“So was it community?” BlackPlanet founder Omar Wasow would later ask. “[I]t was community in the sense that it was user-generated content; it was self-expression.” Self-expression is a powerful ideal, and one that GeoCities proved can bring people together.


Many early communities, GeoCities in particular, offered a charming familiarity in real world connection. Other sites flipped the script entirely to create bizarre and imaginative worlds.

Neopets began as an experiment by students Donna Williams and Adam Powell in 1999. Its first version—a prototype that mixed Williams’ art and Powell’s tech—had many of the characteristics that would one day make it wildly popular. Users could collect and raise virtual pets inside the fictional universe of Neopia. It operated like the popular handheld toy Tamagotchi, but multiplied and remixed for cyberspace.

Beyond a loose set of guidelines, there were no concrete objectives. No way to “win” the game. There were only the pets, and pet owners. Owners could create their own profiles, which let them display an ever expanding roster of new pets. Pulled from their imagination, Williams and Powell infused the site with their own personality. They created “unique characters,” as Williams later would describe it, “something fantasy-based that could live in this weird, wonderful world.”

As the site grew, the universe inside it did as well. Neopoints could be earned through online games, not so much a formal objective as an in-world currency. They could be spent on accessories or trinkets to exhibit on profiles, or be traded in the Neopian stock market (a fully operational simulation of the real one), or used to buy pets at auction. The tens of thousands of users that soon flocked to the site created an entirely new world, mapped on top of a digital one.

Like many community creators, Williams and Powell were fiercely protective of what they had built, and the people that used it. They worked hard to create an online environment that was safe and free from cheaters, scammers, and malevolent influence. Those who were found breaking the rules were kicked out. As a result, a younger audience, and one that was mostly young girls, were able to find their place inside of Neopia.

Neopians—as Neopets owners would often call themselves—rewarded the effort of Powell and Williams by enriching the world however they could. Together, and without any real plan, the users of Neopets crafted a vast community teeming with activity and with its own set of legal and normative standards. The trade market flourished. Users traded tips on customizing profiles, or worked together to find Easter eggs hidden throughout the site. One of the more dramatic examples of users taking ownership of the site was The Neopian Times, an entirely user-run, in-universe newspaper documenting the fictional goings-on of Neopia. Its editorial run has spanned decades, and continues to this day.

Though an outside observer might find the actions of Neopets frivolous, they were a serious endeavor undertaken by the site’s most devoted fans. It became a place for early web adventurers, mostly young girls and boys, to experience a version of the web that was fun, and predicated on an idea of user participation. Using a bit of code, Neopians could customize their profile to add graphics, colors, and personality to it. “Neopets made coding applicable and personal to people (like me),” said one former user, “who otherwise thought coding was a very impersonal activity.” Many Neopets coders went on to make that their careers.

Neopets was fun and interesting and limited only by the creativity of its users. It was what many imagined a version of the web would look like.

The site eventually languished under its own ambition. After it was purchased and run by Doug Dohring and, later, Viacom, it set its sights on becoming a multimedia franchise. “I never thought we could be bigger than Disney,” Dohring once said in a profile in Wired, revealing just how far that ambition went, “but if we could create something like Disney – that would be phenomenal.” As the site began to lean harder into somewhat deceptive advertising practices and emphasize expansion into different mediums (TV, games, etc.), Neopets began to overreach. Unable to keep pace with the rapid developments of the web, it has been sold to a number of different owners. The site is still intact, and thanks to its users, thriving to this day.


Candice Carpenter thought a village was a handy metaphor for an online community. Her business partner, and co-founder, Nancy Evans suggested adding an “i” to it, for interactive. Within a few years, iVillage would rise to the highest peak of Internet fortunes and hype. Carpenter would cultivate a reputation for being charismatic, fearless, and often divisive, a central figure in the pantheon of dot-com mythology. Her meteoric rise, however, began with a simple idea.

By the mid-’90s, community was a bundled, repeatable, commoditized product (or to some, a “totally overused buzzword,” as Omar Wasow would later put it). Search portals like Yahoo! and Excite were popular, but their utility came from bouncing visitors off to other destinations. Online communities had a certain stickiness, as one profile in The New Yorker put it, “the intangible quality that brings individuals to a Web site and holds them for long sessions.”

That unique quality attracted advertisers hoping to monetize the attention of a growing base of users. Waves of investment in community, whatever that meant at any given moment, followed. “The lesson was that users in an online community were perfectly capable of producing value all by themselves,” Internet historian Brian McCullough describes. The New Yorker piece framed it differently. “Audience was real estate, and whoever secured the most real estate first was bound to win.”

TheGlobe.com was set against the backdrop of this grand drama. Its rapid and spectacular rise to prominence and fall from grace is well documented. The site itself was a series of chat rooms organized by topic, created by recent Cornell alumni Stephan Paternot and Todd Krizelman. It offered a fresh take on standard chat rooms, enabling personalization and fun in-site tools.

Backed by the notoriously aggressive Wall Street investment bank Bear Stearns, and run by green, youngish recent college grads, theGlobe rose to a heavily inflated valuation in full public view. “We launched nationwide—on cable channels, MTV, networks, the whole nine yards,” Paternot recalls in his book about his experience, “We were the first online community to do any type of advertising and fourth or the fifth site to launch a TV ad campaign.” Its collapse would be just as precipitous; and just as public. The site’s founders would be on the covers of magazines and the talk of late night television shows as examples of dot-com glut, with just a hint of schadenfreude.

So too does iVillage get tucked into the annals of dot-com history. The site’s often controversial founders were frequent features in magazine profiles and television interviews. Carpenter attracted media attention as deftly as she maneuvered her business through rounds of investment and a colossally successful IPO. Its culture was well-known in the press for being chaotic, resulting in a high rate of turnover that saw the company go through five Chief Financial Officers in four years.

And yet this ignores the community that iVillage managed to build. It began as a collection of different sites, each with a mix of message boards and editorial content centered around a certain topic. The first, a community for parents known as Parent Soup which began at AOL, was their flagship property. Before long, it spanned across sixteen interconnected websites. “iVillage was built on a community model,” writer Claire Evans describes in her book Broad Band, “its marquee product was forums, where women shared everything from postpartum anxiety and breast cancer stories to advice for managing work stress and unruly teenage children.”

Candice Carpenter (left) and Nancy Evans (right). Image credit: The New Yorker

Carpenter had a bold and clear vision when she began, a product that had been brewing for years. After growing tired of the slow pace of growth in positions at American Express and QVC, Carpenter was given more free rein consulting for AOL. It was her first experience with an online world. There wasn’t a lot that impressed her about AOL, but she liked the way people gathered together in groups. “Things about people’s lives that were just vibrant,” she’d later remark in an interview, “that’s what I felt the Internet would be.”

Parent Soup began as a single channel on AOL, but it soon moved to the web along with similar sites for different topics and interests—careers, dating, health and more. What drew people to iVillage sites was their authenticity, their ability to center conversations around topics and bring together people that were passionate about spreading advice. The site was co-founded by Nancy Evans, who had years of experience as an editor in the media industry. Together, they resisted the urge to control every aspect of their community. “The emphasis is more on what visitors to the site can contribute on the particulars of parenthood, relationships and workplace issues,” one writer noted, “rather than on top-tier columnists spouting advice and other more traditional editorial offerings used by established media companies.”

There was, however, something that bound all of the sites together: a focus that made iVillage startlingly consistent and popular. Carpenter would later put it concisely: “the vision is to help women in their lives with the stuff big and small that they need to get through.” Even as the site expanded to millions of users, positioned itself as a network specifically for women, and went through one of the largest IPOs in the tech industry, that simple fact would remain true.

What’s forgotten in the history of dot-com community is the community. There were, of course, lavish stories of instant millionaires and unbounded ambition. But much of the content was generated by people: people who found each other across vast distances through a shared understanding. The lasting connections that became possible through these communities would outlast the boom and bust cycle of Internet business. Sites like iVillage became benchmarks for later social experiments to aspire to.


In February of 2002, Edgar Enyedy, an active contributor to the still-new Spanish version of Wikipedia, posted to the Wikipedia mailing list and to Wikipedia’s founder, Jimmy Wales. “I’ve left the project,” he announced. “Good luck with your wikiPAIDia [sic].”

As Wikipedia grew in the years after it officially launched in 2001, it began to expand to other countries. As it did, each community took on its own tenor and tone, adapting the online encyclopedia to the needs of each locale. “The organisation of topics, for example,” Enyedy would later explain, “is not the same across languages, cultures and education systems. Historiography is also obviously not the same.”

Enyedy’s abrupt exit from the project, and his caustic message, were prompted by a post from Wikipedia’s first editor-in-chief, Larry Sanger. Sanger had been instrumental in the creation of Wikipedia, but he had recently been asked to step back as a paid employee due to a lack of funds. Sanger suggested that sometime in the near future, Wikipedia might turn to ads.

It was more wishful thinking than actual fact—Sanger hoped that ads might bring him his job back. But it was enough to spur Enyedy into action. In The Wikipedia Revolution, author Andrew Lih explains why: “Advertising is the third-rail topic in the community—touch it only if you’re not afraid to get a massive shock.”

By the end of the month, Enyedy had created an independent fork of the Spanish Wikipedia site, along with a list of demands for him to rejoin the project. The list included moving the site from a .com to a .org domain, moving the servers to infrastructure owned by the community, and, of course, a guarantee that ads would not be used. Most of these demands would eventually be met, though it’s hard to tell what influence Enyedy had.

The fork of Wikipedia was both legally and ideologically acceptable. Wikipedia’s content is licensed under a Creative Commons license; it is freely open and distributable. The code that runs it is open source. It was never a question of whether a fork of Wikipedia was possible. It was a question of why it felt necessary. And the answer speaks to the heart of the Wikipedia community.

Wikipedia did not begin with a community, but rather as something far more conventional. The first iteration was known as Nupedia, created by Jimmy Wales in early 2000. Wales imagined a traditional encyclopedia ported into the digital space. An encyclopedia that lived online, he reasoned, could be more adaptable than the multi-volume tomes found buried in library stacks or gathering dust on bookshelves.

Wales was joined by then-graduate student Larry Sanger, and together they recruited a team of expert writers and editors to contribute to Nupedia. To guarantee that articles were accurate, they set up a meticulous set of guidelines for entries. Each article contributed to Nupedia went through rounds of feedback and was subject to strict editorial oversight. After a year of work, Nupedia had fewer than a dozen finished articles and Wales was ready to shut the project down.

However, he had recently been introduced to the concept of a wiki, a website that anybody can contribute to. As software goes, the wiki is not overly complex. Every page has a publicly accessible “Edit” button. Anyone can go in and make edits, and those edits are tracked and logged in real time.

In order to solicit feedback on Nupedia, Wales had set up a public mailing list anyone could join. In the year since it was created, around 2,000 people had signed up. In January of 2001, he sent a message to that mailing list with a link to a wiki.

His hope was that he could crowdsource early drafts of articles from his project’s fans. Instead, users contributed a thousand articles in the first month. Within six months, there were ten thousand. Wales renamed the project to Wikipedia, changed the license for the content so that it was freely distributable, and threw open the doors to anybody that wanted to contribute.

The rules and operations of Wikipedia can be difficult to define. It has evolved almost in spite of itself. Most articles begin with a single, random contribution and evolve from there. “Wikipedia continues to grow, and articles continue to improve,” media theorist Clay Shirky wrote of the site in his seminal work Here Comes Everybody, “the process is more like creating a coral reef, the sum of millions of individual actions, than creating a car. And the key to creating those individual actions is to hand as much freedom as possible to the average user.”

From these seemingly random connections and contributions, a tight-knit group of frequent editors and writers has formed at the center of Wikipedia. Programmer and famed hacktivist Aaron Swartz described how it all came together. “When you put it all together, the story becomes clear: an outsider makes one edit to add a chunk of information, then insiders make several edits tweaking and reformatting it,” described Swartz, adding, “as a result, insiders account for the vast majority of the edits. But it’s the outsiders who provide nearly all of the content.” And these insiders, as Swartz refers to them, created a community.

“One of the things I like to point out is that Wikipedia is a social innovation, not a technical innovation,” Wales once said. In the discussion pages of articles and across mailing lists and blogs, Wikipedians have found ways to collaborate and communicate. The work is distributed and uneven—a small community is responsible for a large number of edits and refinements to articles—but it is impressively collated. Using the ethos of open source as a guide, the Wikipedia community created a shared set of expectations and norms, using the largest repository of human knowledge in existence as their anchor.

Loosely formed and fractured into factions, the Wikipedia community nevertheless follows a set of principles that it has defined over time. Its conventions are defined and redefined on a regular basis, as the community at the core of Wikipedia grows. When it finds a violation of these principles—such as the suggestion that ads will be plastered on the articles its members helped create—the community sometimes reacts strongly.

Wikipedia learned from the fork of Spanish Wikipedia, and set up a continuous feedback loop that has allowed its community to remain at the center of making decisions. This was a primary focus of Katherine Maher, who became executive director of Wikimedia, the foundation behind Wikipedia, in 2016, and then CEO three years later. Wikimedia’s involvement in the community, in Maher’s words, “allows us to be honest with ourselves, and honest with our users, and accountable to our users in the spirit of continuous improvement. And I think that that is a different sort of incentive structure that is much more freeing.”

The result is a hive mind for sorting collective knowledge, one that thrives independently twenty years after it was created. Both Maher and Wales have referred to Wikipedia as a “part of the commons,” a piece of informational infrastructure as important as the cables that pipe bandwidth around the world, built through the work of community.


Fanfiction can be hard to define. It has been the seed of subcultures and an ideological outlet; the subject of intense academic and philosophical inquiry. Fanfiction has often been noted for its unity through anti-hegemony—it is by its very nature illegal or, at the very least, extralegal. Professor Bronwen Thomas has defined the practice plainly: “Stories produced by fans based on plot lines and characters from either a single source text or else a ‘canon’ of works; these fan-created narratives often take the pre-existing storyworld in a new, sometimes bizarre, direction.” Fanfiction predates the Internet, but the web acted as its catalyst.

Message boards, or forums, began as a technological experiment on the web, a way of replicating the Usenet groups and bulletin boards of the pre-web Internet. Once the technology had matured, people began to use them to gather around common interests. These often began with a niche—fans of a TV show, or a unique hobby—and then served as the starting point for much wider conversation. Through threaded discussions, forum-goers would discuss a whole range of things in, around, and outside of the message board theme. “If urban history can be applied to virtual space and the evolution of the Web,” one writer recalls, “the unruly and twisted message boards are Jane Jacobs. They were built for people, and without much regard to profit.”

Some stayed small (and some even remain so). Others grew. Fans of the TV show Buffy the Vampire Slayer had used the official message board of the show for years. It famously took on a life of its own when the boards were shut down, and the users funded and maintained an identical version to keep the community alive. Sites like Newgrounds and DeviantART began as places to discuss games and art, respectively. Before long they were the launching pad for the careers of an entire generation of digital creators.

Fandom found something similar on the web. On message boards and on personal websites, writers swapped fanfiction stories, and readers flocked to boards to find them. They hid in plain sight, developing rules and conventions for how to share among one another without being noticed.

In the fall of 1998, developer Xing Li began posting to a number of Usenet fanfiction groups. In what would come to be known as his trademark sincerity, his message read: “I’m very happy to announce that www.fanfiction.net is now officially open!!!!!! And we have done it 3 weekss ahead of projected finish date. While everyone trick-or-treated we were hard at working debugging the site.”

Li wasn’t a fanfiction creator himself, but he thought he had stumbled upon a formula for its success. What made Fanfiction.net unique was that its community tools—built-in tagging, easy subscriptions to stories, freeform message boards for discussions—were built with fandom in mind. As one writer would later describe this winning combination, “its secret to success is its limited moderation and fully-automated system, meaning posting is very quick and easy and can be done by anyone.”

Fanfiction creators found a home at Fanfiction.net, or FF.net as it was often shortened to. Throughout its early years, Li had a nerdy and steadfast devotion to the development of the site. He’d post sometimes daily to an open changelog on the site, a mix of site-related updates and deeply personal anecdotes. “Full-text searching allows you to search for keywords/phrases within every fanfiction entry in our huge archive,” one update read. “I can’t get the song out of my head and I need to find the song or I will go bonkers. Thanks a bunch. =)” read another (the song was The Cure’s “Boys Don’t Cry”).

Li’s cult of personality and the unique position of the site made it immensely popular. For years, the fanfiction community had stuck to the shadows. FF.net gave them a home. Members took it upon themselves to create a welcoming environment, establishing norms and procedures for tagging and discoverability, as well as feedback for writers.

The result was a unique community on the web, one whose members tried to lift each other up. “Sorry. It’s just really gratifying to post your first fic and get three hits within about six seconds. It’s pretty wild, I haven’t gotten one bad review on FF.N…” one fanfic writer posted in the site’s early days. “That makes me pretty darn happy :)”

The reader and writer relationship on FF.net was fluid. The stories generated by users acted as a reference for conversation among fellow writers and fanfiction readers. One idea often flows into the next, and it is only through sharing content that it takes on meaning. “Yes, they want recognition and adulation for their work, but there‘s also the very strong sense that they want to share, to be part of something bigger than themselves. There’s a simple, human urge to belong.”


As the dot-com era waned, community was repackaged and resold as the social web. The goals of early social communities were looser than the tight niches and imaginative worlds of early community sites. Most functioned to bring one’s real life into digital space. Classmates.com, launched in 1995, is one of the earliest examples of this type of site. Its founder, Randy Conrads, believed that the web was best suited for reconnecting people with their former schoolmates.

Not long after, AsianAve launched from the chaotic New York apartment where the site’s six co-founders lived and worked. Though it had a specific demographic—Asian Americans—AsianAve was modeled after a few other early social web experiences, like SixDegrees. The goal was to simulate real life friend groups, and to make the web a fun place to hang out. “Most of Asian Avenue’s content is produced by members themselves,” an early article in The New York Times describes. “[T]he site offers tool kits to create personal home pages, chat rooms and interactive soap operas.” Eventually, one of the site’s founders, Benjamin Sun, began to explore how he could expand his idea beyond a single demographic. That’s when he met Omar Wasow.

Wasow was fascinated with technology from a young age. When he was a child, he fell in love first with early video games like Pong and Donkey Kong. By high school, he had made the leap to programmer. “I begged my way out of wood shop into computer science class. And it really changed my life. I went from being somebody who consumed video games to creating video games.”

In 1993, Wasow founded New York Online, a Bulletin Board System that targeted a “broad social and ethnic ‘mix’,” instead of pulling from the same limited pool of upper-middle class tech nerds most networked projects focused on. To earn an actual living, Wasow developed websites for popular magazine brands like Vibe and Essence. It was through this work that he crossed paths with Benjamin Sun.

By the mid-1990’s, Wasow had already gathered a loyal following and public profile, featured in magazines like Newsweek and Wired. Wasow’s reputation centered on his ability to build communities thoughtfully, to explore the social ramifications of his tech before and while he built it. When Sun approached him about expanding AsianAve to an African American audience, a site that would eventually be known as BlackPlanet, he applied the same thinking.

Wasow didn’t want to build a community from scratch. Any site that they built would need to be a continuation of the strong networks Black Americans had been building for decades. “A friend of mine once shared with me that you don’t build an online community; you join a community,” Wasow once put it, “BlackPlanet allowed us to become part of a network that already had centuries of black churches and colleges and barbecues. It meant that we, very organically, could build on this very powerful, existing set of relationships and networks and communities.”

BlackPlanet offered its users a number of ways to connect. A central profile—the same kind that MySpace and Facebook would later adopt—anchored a member’s digital presence. Chat rooms and message boards offered opportunities for friendly conversation or political discourse (or sometimes, fierce debate). News and email were built right into the site to make it a centralized place for living out your digital life.

By the mid-2000’s, BlackPlanet was a sensation. It captured a large share of the African Americans who were coming online for the first time. Barack Obama, then a Senator running for President, joined the site in 2007. Its growth exploded into the millions; it was a seminal experience for black youth in the United States.

After being featured in a segment on The Oprah Winfrey Show, teaching Oprah how to use the Internet, Wasow’s profile reached soaring heights. The New York Times dubbed him the “philosopher-prince of the digital age” for his considered community building. “The best the Web has to offer is community-driven,” Wasow would later say. He never stopped building his community thoughtfully, and it, in turn, became an integral part of the country’s culture.

Before long, a group of developers would look at BlackPlanet and wonder how to adapt it to a wider audience. The result was the web’s first true social networks.



Chapter 8: CSS

In June of 2006, web developers and designers from around the world came to London for the second annual @media conference. The first had been a huge success, and @media 2006 had even more promise. Its speaker lineup was pulled from some of the most exciting and energetic voices in the web design and browser community.

Chris Wilson was there to announce the first major release to Microsoft’s Internet Explorer in nearly half a decade. Rachel Andrew and Dave Shea were swapping practical tips about CSS and project management. Tantek Çelik was sharing some of his recent work on microformats. Molly Holzschlag, Web Standards Project lead at the time, prepared an illuminating talk on internationalization and planned to join a panel about the latest developments of CSS.

The conference kicked off on Thursday with a keynote talk by Eric Meyer, a pioneer and early adopter of CSS. The keynote’s title slide read “A Decade of Style.” In a captivating and personal talk, Meyer recounted the now decade-long history of Cascading Style Sheets, or CSS. His own professional history intertwined and inseparable from that of CSS, Meyer used his time on the stage to look at the language’s roots and understand better the decisions and compromises that had led to the present day.

At the center of his talk, Meyer unveiled the secret to the success of CSS: “Never underestimate the effect of a small, select group of passionate experts.” CSS, the open and accessible design language of the Web, thrived not because of the technology itself, but because of people—the people who built it (and built with it) and what they shared as they learned along the way. The history of CSS, Meyer concluded, is the history of the people who made it.

Fifteen years after that talk, and nearly three decades after its creation, that is still true.


On Thursday morning, October 20th, 1994, attendees of another conference, the Second International WWW Conference, shuffled into a room on the second floor of the Ramada Hotel in Chicago. It was called the Gold Room. The Grand Hall across the way was quite a bit larger—reserved for the keynote presentations on the day—but the Gold Room would work just fine for the relatively small group that had managed to make the 8:30 a.m. panel.

Most in attendance that morning would have been exhausted and bleary-eyed, tired from late-night networking events that had spanned the previous three nights. Thursday was Developer Day, the final day of the conference.

The Chicago conference had been preceded six months earlier by the first WWW conference in Geneva. The contrast would have been immediately apparent. Rather than breakout sessions focused on standards and specs, the halls buzzed with industry insiders and commercial upstarts selling their wares. In a short amount of time, the Web had gone mainstream. The conference in Chicago reflected that shift in tone: it was an industry event, with representatives from Microsoft, HP, Silicon Graphics, and many more.

The theme of the conference was “Mosaic and the Web,” and the site of Mosaic’s creation, NCSA, had helped to organize the event. It was a fact made more dramatic by a press release from Netscape, a company mostly staffed by former NCSA employees, just days earlier. The first version of their browser—dramatically billed as a “Mosaic killer”—was not only in beta, but would be free upon release (a decision that would later be reversed). Most members of the Netscape team were in attendance, in commercial opposition to their former employer and biggest rival.

The grand intrigue of commercial clashes somewhat overshadowed the first morning session on the last day of the conference, “HTML and SGML: A Technical Presentation.” This, in spite of the fact that the Web’s creator, Sir Tim Berners-Lee, was leading the panel. The final presenter was Håkon Wium Lie, who worked with Berners-Lee and Robert Cailliau at CERN. His presentation was about a new proposal for a design language that Lie was calling Cascading HTML Style Sheets, or CHSS for short.

The proposal had come together in a hurry. A conversation with standards editor Dave Raggett helped convince Lie of the urgency. Running right up to the deadline, Lie had posted the first draft of his proposal ten days before the conference.


Lie had come to the Web early and enthusiastically. Early enough to have used Nicola Pellow’s line-mode browser to telnet into the very first website. And enthusiastic enough to join Berners-Lee and the web team at CERN shortly after graduating from the MIT Media Lab in 1992. “I heard the big bang and came running,” is how Lie puts it.

Hakon Wium Lie (Credit: Heinrich-Böll-Stiftung)

Not long after he began at CERN, the language of the web shifted. Realizing that the web’s audience could not stare at black text on a white background all day, the makers of Mosaic introduced a tag that let website creators add inline images to their website. Once the gate was open, more features rushed out. Mosaic added even more tags for colors and fonts and layout. Lie, and the team at CERN, could only sit on the sidelines and watch, a fact Lie would later comment on, saying, “It was like: ‘Darn, we need something quick, otherwise they’re going to destroy the HTML language.’”

The impending release of Netscape in 1994 offered no relief. Marc Andreessen and his team at Netscape promised a consumer-focused web browser. Berners-Lee had developed HTML—the singular language of the web—to describe documents, not to design them. To fill that gap, browsers stuffed the language of HTML with tags to allow designers to create dynamic and stylized websites.

The problem was, there was not yet a standard way of doing this. So each browser added what they felt was necessary and others were forced to either follow suit or go their own way. “As soon as images were allowed inline in HTML documents, the web became a new graphical design medium,” programmer and soon-to-be W3C member Chris Lilley posted to www-talk around that time, “If style sheets or similar information are not added to HTML, the inevitable price will be documents that only look good on a particular browser.”

Lie’s proposal—which he began working on almost as soon as he joined up at CERN—was for a second language. CHSS used style sheets: separate documents that described the visual design of HTML without affecting its structure. So you could change your HTML and your style sheet stayed the same. Change the style sheet and HTML stayed the same. Content lived in one place, and presentation in another.
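The idea is easier to see in code than in prose. Here is a minimal sketch in modern CSS syntax rather than Lie’s original CHSS notation (the file names are placeholders): the HTML carries only structure and content, while a separate style sheet carries the presentation.

```html
<!-- page.html: structure and content only -->
<link rel="stylesheet" href="style.css">
<h1>A Decade of Style</h1>
<p>Edit this content freely; the style sheet never has to change.</p>
```

```css
/* style.css: presentation only; edit it freely and the HTML never changes */
h1 { font-family: Georgia, serif; color: #223344; }
p  { line-height: 1.5; max-width: 40em; }
```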

There were other style sheet proposals. Rob Raisch from O’Reilly and Viola creator Pei-Yuan Wei each had their own spin. Working at CERN, where the web had been created, helped boost the profile of CHSS. Its relative simplicity also made it appealing to browser makers. The cascade in Cascading HTML Style Sheets, however, set it apart.

Each person experiences the web through a prism of their own experience. It is viewed through different devices, under different conditions. On screen readers and phones and on big screen TVs. One’s perception of how a page should look based on their situation runs in stark contrast to both the intent of the website’s author and the limitations and capabilities of browsers. The web, therefore, is chaotic. Multiple sources mingle and compete to decide the way each webpage is perceived.

The cascade brings order to the web. Through a simple set of rules, multiple parties—the browser, the user, and the website author—can define the presentation of HTML in separate style sheets. As rules flow from one style sheet to the next, the cascade balances one rule against another and determines the winner. It keeps design for the web simple and inheritable, and it embraces the web’s naturally unstable state. The cascade has changed over time, but it has made the web adaptable to new computing environments.
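A small, hypothetical sketch of the cascade at work: three style sheets, from three different parties, each have an opinion about the same paragraph, and the cascade resolves the conflicts.

```css
/* Browser default style sheet */
p { color: black; font-size: 16px; margin: 1em 0; }

/* User style sheet: a reader who prefers larger text */
p { font-size: 20px; }

/* Author style sheet: the site's designer */
p { color: #333333; }

/* Result: the designer's color wins out over the browser default,
   the reader's larger font size is honored because the designer says
   nothing about it, and the browser's default margins fill in
   everything left unspecified. */
```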

After Lie gave his presentation on the second floor of the Ramada Hotel in Chicago, it was the cascade that monopolized discussions. The makers of the web used the CHSS proposal as a springboard for a much wider conversation about author intent and user preferences. In what situations, in other words, should the author of a website’s design override the preference of a user or the determination of a browser? Productive debate spilled outside of the room and onto the www-talk mailing list, where it was picked up by Bert Bos.

Bert Bos (Credit: dotConferences)

Bos was a Dutch engineer, studying mathematics at the University of Groningen in the Netherlands. Before he graduated, he created a browser called Argo, a well-known and useful tool for several of the University’s departments. Argo was notable for two reasons. The first was that it included an early iteration of what would later be known as applets. The second was that it included Bos’ own style sheet implementation, one that was not too unlike CHSS. He recognized an opportunity.

“Most of the content of CSS1 was discussed on the whiteboard in Sophia-Antipolis in July 1995… Whenever I encounter difficult technical problems, I think of Bert and that whiteboard.”

Hakon Wium Lie

Lie and Bos began working together, merging their proposals into something more refined. The following year, in the spring of 1995, the third WWW conference was held in Darmstadt, Germany. Netscape, released just six months earlier, was already riding a new wave of popularity led by its new CEO, Jim Barksdale. A few months away from the most successful IPO in history, Netscape would soon launch itself into the stratosphere, with the web riding shotgun, and it was still adding new, non-standard HTML features whenever it could.

Lie and Bos had only ever communicated remotely. In Germany, they met in person for the first time and gave a joint presentation on a new proposal for Cascading Style Sheets, CSS (the H dropped by then).

It stood in contrast to what was available at the time. With only HTML at their disposal, web designers were forced to create “page layout via tables and Netscapisms like FONT SIZE,” as one Suck columnist wrote at the time, later quoted in a dissertation written by Lie. Table-bloated webpages were slow to load, and difficult for assistive devices like screen readers to understand. CSS solved those issues. That same writer, though he did not believe in its longevity, praised CSS for its “simple elegance, but also… its superfluousness and redundancy.”
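The contrast is easiest to see side by side. Below is a simplified, hypothetical sketch of the kind of presentational markup the columnist was describing, followed by a CSS equivalent.

```html
<!-- The old way: layout and styling baked directly into the markup -->
<table width="600" border="0"><tr><td>
  <font face="Verdana" size="5" color="#990000">Welcome</font>
</td></tr></table>
```

```css
/* The CSS way: the markup stays a plain <h1>,
   and the style sheet handles everything else */
h1 {
  font-family: Verdana, sans-serif;
  font-size: 24px;
  color: #990000;
  max-width: 600px;
}
```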

Shortly after the conference, Bos joined Lie at the W3C. They began drafting a specification that summer. Lie recalls the frenzied and productive work they did fondly. “Most of the content of CSS1 was discussed on the whiteboard in Sophia-Antipolis in July 1995… Whenever I encounter difficult technical problems, I think of Bert and that whiteboard.”


Chris Wilson, in 1995, was already something of an expert in browsers. He had worked at NCSA on the Mosaic team, one of two programmers who created the Windows version. In the basement of the NCSA lab, Wilson was an eager participant in the conversations that helped define the early web.

Most of his colleagues at NCSA packed up and moved to Silicon Valley to work on Netscape’s Mosaic killer. Wilson chose something different. He settled farther north, in Seattle. His first job was with Spry, working on a Mosaic-licensed browser for their Internet In a Box package. However, as an engineer it was hard for Wilson to avoid the draw of Microsoft in Seattle. By 1995, he worked there as a software developer, and by 1996, he was moved to the Internet Explorer team just ahead of the browser’s version 2 release.

Internet Explorer was Microsoft’s late entry to the browser market. Bill Gates had notoriously sidestepped the Internet and the web for years, before completely reversing his company’s position. In that time, Netscape had captured a swiftly expanding market that didn’t exist when they started. They had released two wildly successful versions of their user-friendly, cross-platform browser. Their window to the web was adorned with built-in email, an easy install process, and a new language called JavaScript that let developers add lively animations to a web that had been previously inert.

Microsoft offered comparatively little. Internet Explorer began as a port of Mosaic, but by the time Wilson signed on, it rested on a rewritten codebase. Besides a few built-in native Microsoft features that appealed to the enterprise market, Internet Explorer had been unable to set itself apart from the sharp focus and pace of Netscape.

Microsoft needed a differentiator. Wilson thought he had one. “There’s this thing called style sheets,” Wilson recalls telling his boss at the time, “it lets you control the fonts and you get to make really pretty looking pages, Netscape isn’t even looking at this stuff.” Wilson got approval to begin working on CSS on the spot.

At the time, the CSS specification wasn’t yet complete. To bridge the gap of how things were supposed to work, Wilson met regularly with Lie, Bos, and other members of the W3C. They would make edits to their draft specification, and Wilson would try it out in his browser. Rinse and repeat. Later, they even brought Vidur Apparao from Netscape into their discussions, which became more formal. Eventually, they became the CSS Working Group.

Internet Explorer 3 was released in August of 1996. It was the first browser to have any support for CSS, a language that hadn’t yet been formally recommended by the W3C. Later, that would become an issue. “There are still a lot of IE3s out there,” Lie would later say a few years after its initial release, “and since they don’t conform to the specification, it’s very hard to write a style sheet that will work well with IE3 while also working well with later browsers.”

Internet Explorer 3 (Credit: My Internet Explorer)

At the time, however, it was eminently necessary. A working version of CSS in a browser from the largest tech company in the world lent the language stability. Table-based layouts and Netscape-only tags were still more widely used, but CSS now stood a chance.

By 1997, the W3C split the HTML working group into three parts, with CSS getting its own dedicated group formed from the ad-hoc Internet Explorer 3 team. It would be chaired by Chris Lilley, who came to the web as a computer graphics specialist. Lilley had pointed out years earlier the need for a standardized web technology for design. At the W3C, he would lead the effort to do just that.

The first formal Recommendation of CSS had been published in December of 1996. A year and a half later, CSS level 2 followed.

As chair of the working group, Lilley was active on the www-talk mailing list. He’d often solicit advice or answer questions from developers. In one such exchange, he received an email from one Eric Meyer. “Hey, I threw together these test pages, I don’t know if you’d be interested in them,” was how Meyer remembers the message, adding that he didn’t realize that “there was nothing else quite like it in existence.”


Eric Meyer was at the web conference in Chicago where Håkon Lie first demoed CSS, though not at the session. He didn’t get a chance to actually see CSS until a few years later, at the fifth annual Web Conference in Paris. He was there to present a paper on web technology he had developed while working as the Case Western webmaster. His real purpose there, however, was to discover the probable future of the web.

He attended one panel featuring Håkon Lie and Bert Bos, alongside Dave Raggett. They each spoke to the capabilities of CSS as part of the W3C specification. Chris Wilson was there too, nursing a bit of a cold but nevertheless emphatically demoing a working version of CSS in Internet Explorer 3. “I’d never even heard of CSS before, but by the time that panel was over, the top of my head felt like it had blown off,” Meyer would later say, “I was instantly sold. It just felt right.”

Eric A. Meyer (Credit: meyerweb.com)

Meyer got home and began experimenting with CSS. But he quickly hit a wall. He had little more than the spec to go off of (there was no such thing yet as formal documentation or CSS tutorials), and something felt off. He’d code a bit of CSS and expect it to work one way, and it’d work another.

That’s when he began to pull together test pages. Meyer would isolate his code to a single feature of CSS. Then he’d test that across browsers, and document their inconsistencies, alongside how he thought they should work. “I think it was mostly the sheer joy of crawling through a new system, pulling it apart, figuring out how it worked, and documenting what worked and what didn’t. I don’t know exactly why those kinds of things excite me, but they do.” Over the years, Meyer has built a career on top of this type of experimentation.

Those test pages—posted to Meyer’s website and later to other blogs—carefully arranged and unknowingly documented the proper implementation of CSS according to its specification. Once Chris Lilley got a hold of them, the CSS Working Group helped Meyer transform them into the official W3C CSS Test Suite, an important tool to assist browsers working to introduce CSS.

Test pages and tutorials on Meyer’s personal site soon became regular columns on popular blogs. Then O’Reilly approached him about writing a book, which eventually became CSS: The Definitive Guide. Research for the book connected Meyer to the people that were building CSS inside of the W3C and browsers. He, in turn, shared what he learned with the web development community. Before long, Meyer had cemented a legacy as a central figure in the history of CSS.

His work continued. When the Web Standards Project reached out to programmer John Allsopp to form a committee dedicated to CSS, he immediately thought of Meyer. Meyer was joined by Allsopp and several others: Sue Sims, Ian Hickson, David Baron, Roland Eriksson, Ken Gunderson, Braden McDaniel, Liam Quinn and Todd Fahrner. Collectively, their official title was the CSS Action Committee, but they often went by CSS Samurai.

CSS was a properly standardized design language. If done right, it could shake loose the Netscape-only features and table-based layouts of the past. But browsers were not catching up to CSS quick enough for some developers. And when they did, it was frequently an afterthought. “You really can’t imagine, unless you lived through it, just how buggy and inconsistent and frustrating browser support for CSS was,” Meyer would later recall. The goal of the CSS Samurai was to fix that.

The committee took a familiar Web Standards Project approach, publishing public reports about lack of browser support on the one hand, and privately meeting with browser makers to discuss changes on the other. A third objective of the committee was to speak to developers directly. Grassroots education became a central goal to the work of the CSS Samurai, an effective instrument of change from the ground up.

Netscape provided the greatest hurdle. Wholly dependent on JavaScript, Netscape used a non-standard version of CSS known as JSSS, a language that has by now been largely forgotten. The browser processed style sheets dynamically, using JavaScript to render the page, which made its support uneven and often slow to load. It would not be until the release of the Gecko rendering engine in the early 2000’s that JSSS would be removed. As Netscape transformed into Mozilla in the wake of that change, it would finally come around to a functional CSS implementation.
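JSSS expressed styles as JavaScript assignments rather than declarative rules. A rough sketch of what that looked like, modeled on Netscape’s JSSS proposal (the details here are illustrative, not an exact excerpt):

```html
<!-- JSSS, understood only by Netscape 4: styles are JavaScript
     property assignments evaluated while the page renders -->
<style type="text/javascript">
  tags.H1.color = "blue";
  tags.P.fontSize = "14pt";
</style>

<!-- The CSS equivalent, which Netscape 4 translated into JSSS
     internally before applying it -->
<style type="text/css">
  h1 { color: blue; }
  p  { font-size: 14pt; }
</style>
```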

But with other browsers, particularly with versions of Internet Explorer that were capturing larger segments of the market, WaSP proved successful. The hearts and minds of developers were with them, as they entered a new era of styling on the web.


There was at least one conversation over coffee that saved CSS. There may have been more, but the conversation in question happened in 1999, between Todd Fahrner and Tantek Çelik. Fahrner was a member of the Web Standards Project and a CSS Samurai, often on the front-lines of change. Among untold work with and for the web, he helped Meyer with the CSS Test Suite and developed a practical litmus test for CSS support known as the Acid Test.

Çelik worked at Microsoft. He was largely responsible for bringing web standards support into Internet Explorer for Mac, years before other major browsers would do the same. Çelik would have a long and lasting impact on the development of CSS. He would soon join the Web Standards Project Steering Committee. Later, as a member of the CSS Working Group, he would contribute and help edit several specifications.

On that particular day, over coffee, the topic of conversation was the web’s existential crisis. For years, browsers had added ad-hoc, uneven and incompatible versions of CSS. With a formalized Recommendation from the W3C, there was finally an objectively correct way of doing things. But if browsers took the new, correct rules from the W3C and applied them to all of the sites that had relied on the old, incorrect rules from before, they would suddenly look broken.

What they needed was a toggle. Some sort of switch that developers could turn on to signal that they wanted the new, correct rules. That day, Fahrner proposed using the doctype declaration. It’s a bit of text at the top of the HTML page that specifies a document type definition (the one Dan Connolly had spent years at the W3C standardizing). The practice became known as doctype switching. It meant that new sites could code CSS the right way, and old sites would continue to work just fine.
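In practice, the switch was nothing more complicated than the presence of a complete doctype at the top of a page. A sketch:

```html
<!-- A complete doctype like this one switches the browser into
     "standards mode," where the new, specification-correct CSS rules apply -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">

<!-- Leave the doctype off (or use an old, partial one) and the browser
     falls back to "quirks mode," rendering the page with its old,
     incorrect rules so that existing sites keep working -->
```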

When Internet Explorer for Mac version 5 was released, it included doctype switching. Before long, all the browsers did. That swung the door open for standards-compliant CSS in browsers.


“We have not learned to design the Web.” So read the first line of the introduction of Molly Holzschlag’s 2003 book Cascading Style Sheets: The Designer’s Edge. It was a bold statement, not the first or the last from Holzschlag—who has had a profound and lasting impact on the evolution of the web. Throughout her career Holzschlag has been a restless advocate for people who use the web, even when that has clashed with the makers of web technology. Her decades-long history with the web has spanned well beyond CSS, to almost every aspect of its development and evolution.

Holzschlag goes on. “To get to this point in the web’s history, we’ve had to borrow guidelines from other media, hack and workaround our way through browser inconsistencies, and bend markup so far out of its normal shape that we’ve broken it.”

Molly Holzschlag

At the end of 2000, Netscape released the sixth version of their browser. Internet Explorer 6 came out not long after. The CSS support in these browsers was far more capable than anything that had come before. But Microsoft wouldn’t release another browser for five years. Netscape, all but defeated by Microsoft, would take years to regroup and reform as the more capable and standards-compliant Firefox.

The work of the Web Standards Project and the W3C had brought a working version of CSS to the web. But it was incomplete, and often difficult to understand. And developers had to take older browsers into account, which many people still used.

In the early 2000’s, creators of the web were caught between a past riddled with inconsistency and a future that captured their imagination. “Designers and developers were pushing the bounds of what browsers were capable of,” web developer Eevee recalls about using CSS at the time, “Browsers were handling it all somewhat poorly. All the fixes and workarounds and libraries were arcane, brittle, error-prone, and/or heavy.”

Most web designers continued to rely on a combination of HTML table hacks and Netscape-specific tags to create advanced designs. Level two of CSS offered even more possibilities, but designers were hesitant to go all in and risk a bad experience for Netscape users. “Netscape Navigator 4 was holding everyone back,” developer Dave Shea would later say, “It just barely supported CSS, and certainly not in any capacity that we could start building completely table-less sites. And the business case for continued support was too strong to ignore.”

Beneath the surface, however, a vibrant and influential community spread new ideas through blogs and mailing lists and books. That community introduced clever solutions with equally clever names. The “Holly Hack” and “clearfix” came from Position Is Everything, maintained by Holly Bergevin and John Gallant. Douglas Bowman’s “Sliding Doors of CSS,” Dan Webb and Patrick Griffiths’ “Suckerfish Dropdowns,” and Dan Cederholm’s “Faux Columns” all came from Jeffrey Zeldman’s A List Apart. And Meyer and Allsopp created the CSS-Discuss mailing list as a workshop for innovative ideas and practice.
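For a flavor of what these hacks looked like, here is a sketch of the classic clearfix technique, paired with the Holly Hack for Internet Explorer (the class name is just a convention):

```css
/* clearfix: force a container to fully wrap its floated children
   without adding any extra markup to the page */
.clearfix:after {
  content: ".";
  display: block;
  height: 0;
  clear: both;
  visibility: hidden;
}

/* The Holly Hack: the "* html" selector matched only in Internet
   Explorer, where giving the element a tiny height triggered the
   same wrapping behavior that :after provided in other browsers */
* html .clearfix {
  height: 1%;
}
```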

“It’s going to be the people using CSS in the next few years who will come up with the innovative design ideas we need to help drive the potential of the Web in general.”

Molly Holzschlag

And yet, so much of the energy of that community was spent on hacks and workarounds and creative solutions. The most interesting design ideas always came attached with a caveat, a bit of code to make them work in this browser or that. The first edition of The CSS Anthology by Rachel Andrew, which became a handbook for many CSS developers, featured an entire chapter on what to do about Netscape 4.

The innovators of CSS—beset by disparities difficult to explain—were forced to pick apart the language and find a way through to their designs. In the wake of that newness came a creative surge. Some of the most expressive and shrewd designs in the web’s history came out of this era.

That very same community, however, often fell into a collective preoccupation with what it could make CSS do. It was a culture that, at times, overvalued hacks and workarounds. Largely out of necessity, shared education focused on the how rather than the why, and too-clever techniques sometimes outpaced their usefulness.

That would begin to change. Holzschlag ended the introduction to her book on CSS with a nod to the future. “It’s going to be the people using CSS in the next few years who will come up with the innovative design ideas we need to help drive the potential of the Web in general.”


Dave Shea was an ideological disciple of the Web Standards Project, an active member of a growing CSS community. He agreed with Holzschlag. “We entered a period where individuals could help shape the future of the web,” he would later describe the moment. Like others, he was frustrated with the limitations of browsers without CSS support.

The antidote to this type of frustration was often to have a bit of fun. Though getting larger by the day, the web design community was small and familiar. For some, it became a hobby to disseminate inspiration. Domino Shriver compiled a list of CSS designs on his site, WebNoveau, later maintained by Meryl Evans. Each day, new web pages designed with CSS would be posted to its homepage. Chris Casciano’s Daily CSS Fun put a twist on that approach: each day he’d post a new style sheet for the same HTML file, capturing the wide range of designs CSS made possible.

In May of 2003, Shea produced his own take on the format when he launched the CSS Zen Garden. The project rested on a simple premise. Each page used exactly the same HTML file with exactly the same content. The only thing that was different was the page’s style sheet, the CSS that was applied to that HTML. Rather than create them himself, Shea solicited style sheets from developers all over the world to create a digital gallery of CSS inspiration. Designs ranged from constructed minimalism to the astonishingly baroque. It was a playground to explore what was possible.

At once a source of influence, a practical demonstration of CSS advantages, and a showcase of great web design, the Zen Garden spread to the far ends of the web. What began with five designs soon turned into a website filled with dozens of different designs. And then more. “Hundreds of designers have made their mark—and sometimes their reputations—by creating Zen Garden layouts,” author Jeffrey Zeldman would later say in his book Designing with Web Standards, “and tens of thousands all over the world have learned to love CSS because of it.”

Though Zen Garden would become the most well-known, it was only one contribution to a growing oeuvre of inspiration projects on the web. Web creators wanted to look to the future.

In 2005, Shea published a book based on the project with Molly Holzschlag called The Zen of CSS Design. By then, CSS had web designers’ full attention.


In 1998, in an attempt to keep pace with Microsoft, Netscape made the decision to release its browser for free and to open its source code under a newly formed umbrella project known as Mozilla, one that would ultimately lead to the release of the Firefox browser in 2004.

David Baron and Ian Hickson both began their careers at Mozilla in the late 1990’s as volunteers, and later interns, on the Mozilla Quality Assurance team, identifying standards-compliance bugs. It was through the course of their work that they became deeply familiar not just with how CSS was supposed to work, but how, in practice, it was being used inside of a standards-driven browser. During that time, Hickson and Baron became an integral part of a growing CSS community, and joined the CSS Samurai. They helped write and run the tests for the CSS Test Suite. They became active participants in the www-style mailing list, and later, the CSS Working Group itself.

While Meyer was writing his first book, CSS: The Definitive Guide, he recalls asking Baron and Hickson for help in understanding how some parts of CSS worked. “I doubt that I will ever stop owing them for their dedication to getting me through the wilderness of my own misunderstandings,” he would later say. It was their attention to detail that would soon make them an incredible asset.

Browsers understand style sheets, the language of CSS, based on the words of the specifications at the W3C. If the language is not specific enough, or if not every edge case or feature combination has been considered, this can lead to incompatibilities among browsers. While working at the W3C, Hickson and Baron helped bring the vague language of its technical specifications into clearer focus. They made the definition of CSS more precise, consistent, and easier to implement correctly.

Their work, alongside Bert Bos, Tantek Çelik, Håkon Lie and others, led to a substantial revision of the second version of CSS, what CSS Working Group member Elika Etemad would later describe as “a long process of plugging the holes, fixing errors, and building test suites for the core CSS standard.” It was tireless work, as much about conversation with browser programmers as actual technical work and writing.

It was also a job nobody thought would take very long. There had been two versions of CSS released in a few years. A minor revision was expected to take a fraction of the time. At a conference a few months in, several CSS editors commented that if they stayed up late enough one night, they might be able to get it done before the next day. Instead, the work would take nearly a decade.

For years, Elika Etemad, then known only as ‘fantasai’, had been an active member of the www-style mailing list and Mozilla bug tracker. It had put her in conversations with browser makers, and members of the W3C. Though she had spoken with many different members of the CSS Working Group over the years, some of her most engaged and frequent discussions were with David Baron and Ian Hickson. Like Hickson and Baron, ‘fantasai’ was uncovering bugs and spec errors that no one else had noticed—and happily reporting what she found.

Elika Etemad (Credit: Web Conferences Amsterdam)

That work earned her an invite to the W3C Technical Plenary in 2004. Each year, members of the W3C working groups travel to shifting locations (2020 was the first year it was held virtually) for the event. W3C discussions are mostly done through emails and conference calls and editorial comments. For some members, the plenary is the only time they see each other face to face all year. In 2004, it was held in the south of France, in a town called Mandelieu-la-Napoule, overlooking the Bay of Cannes. It was there that Etemad met Baron and Hickson in person for the first time.

The CSS Working Group, several years into their work on CSS 2.1, invited Etemad to join them. Microsoft had all but pulled back from the standards process after the release of Internet Explorer 6 in 2001. The working group had to work with actively developed browsers like Mozilla and Opera while constrained by the stagnant IE6. They spent years ironing out the details, always feeling on the verge of completion. “We’re almost out of issues, and the new issues we are getting are usually minor stuff like typo fixes and so forth,” Hickson posted in 2006, still years away from a final specification.

During this time, the CSS Working Group was also working on something new. Hickson and Baron had learned from CSS 2.1, an exhaustive but monolithic specification. “We succeeded,” Hickson would later comment, “but boy are they insanely complicated. What we should have done instead is just break the constraints and come up with something simpler, ideally something that more closely matched what browsers implemented at the time.” Over time, the CSS Working Group began to shift its approach. A specification would no longer be a single, immutable document. It would change over time to accommodate real-world browser implementations.

Beginning with CSS3, the specification also transitioned to a new format, one meant to cover a wider set of features and keep pace with browser development. CSS3 consists of a number of modules, each addressing a single area of functionality—including color, fonts, text, and more advanced concepts like media queries. “Some of the CSS3 modules out there are ‘concept albums,’” ‘fantasai’ describes, “specs that are sketching out the future of CSS.” These “concepts” are developed independently and at a variable pace. Each CSS3 module has its own editors. Collectively, they have contributed to a bolder vision of CSS. Individually, they are developed alongside real-world browser implementations and, on their own, can more deftly adapt to change.
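Media queries, defined in one of those modules, are a good example of the kind of feature the modular approach made room for. A short sketch:

```css
/* From the CSS3 Media Queries module: one style sheet,
   adapting its layout to the device displaying it */
.column {
  float: left;
  width: 50%;
}

@media screen and (max-width: 600px) {
  /* On narrow screens, stack the columns instead of floating them */
  .column {
    float: none;
    width: 100%;
  }
}
```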

The modular approach to CSS3 would prove effective. The second decade of CSS would be different from the first, introducing sweeping changes and refreshing new features. New features would lead to new designs, and eventually, a new web.



Chapter 7: Standards

It was the year 1994 that the web came out of the shadow of academia and onto everyone’s screens. In particular, it was the second half of the second week of December 1994 that capped off the year with three eventful days.

Members of the World Wide Web Consortium huddled around a table at MIT on Wednesday, December 14th. About two dozen people made it to the meeting, representatives from major tech companies, browser makers, and web-based startups. They were there to discuss open standards for the web.

When done properly, standards set a technical lodestar. Companies with competing interests and priorities can orient themselves around a common set of agreed upon documentation about how a technology should work. Consensus on shared standards creates interoperability; competition happens through user experience instead of technical infrastructure.

The World Wide Web Consortium, or W3C as it is more commonly referred to, had been on the mind of the web’s creator, Sir Tim Berners-Lee, as early as 1992. He had spoken with a rotating roster of experts and advisors about an official standards body for web technologies. The MIT Laboratory for Computer Science soon became his most enthusiastic ally. After years of work, Berners-Lee left his job at CERN in October of 1994 to run the consortium at MIT. He had no intention of being a dictator. He had strong opinions about the direction of the web, but he still preferred to listen.

W3C, 1994

On the agenda — once basic introductions were out of the way — was a long list of administrative details that needed to be worked out. The role of the consortium, the way it conducted itself, and its responsibilities to the wider web were little more than sketched out at the beginning of the meeting. Little by little, the 25 or so members walked through the list. By the end of the meeting, the group felt confident that the future of web standards was clear.

The next day, December 15th, Jim Clark and Marc Andreessen announced the recently renamed Netscape Navigator version 1.0. It had been out for several months in beta, but that Thursday marked a wider release. In a bid for a growing market, it was initially given away for free. Several months later, after the release of version 1.1, Netscape would be forced to walk that back. In either case, the browser was a commercial and technical success, improving on the speed, usability, and features of browsers that had come before it.

On Friday, December 16th, the W3C experienced its first setback. Berners-Lee never meant for MIT to be the exclusive site of the consortium. He planned for CERN, the birthplace of the web and home to some of its greatest advocates, to be a European host for the organization. That day, however, CERN approved a massive budget for its Large Hadron Collider, forcing it to shift priorities. A refocused budget left little room for hypertext Internet experiments not directly contributing to the central project of particle physics.

CERN would no longer be the European host of the W3C. All was not lost. Months later, the W3C set up at France’s National Institute for Research in Computer Science and Control, or INRIA. By 1996, a third site at Japan’s Keio University would also be established.

Far from an outlier, this would be neither the last setback the W3C faced nor the last it would overcome.


In 1999, Berners-Lee published an autobiographical account of the web’s creation in a book entitled Weaving the Web. It is a concise and even history, a brisk walk through the major milestones of the web’s first decade. Throughout the book, he often returns to the subject of the W3C.

He frames the web consortium, first and foremost, as a matter of compromise. “It was becoming clear to me that running the consortium would always be a balancing act, between taking the time to stay as open as possible and advancing at the speed demanded by the onrush of technology.” Striking a balance between shared compatibility and shorter and shorter browser release cycles would become a primary objective of the W3C.

Web standards, he concedes, thrive through tension. Standards are developed amidst disagreement and hard-won bargains. Recalling a time just before the W3C’s creation, Berners-Lee notes how the standards process reflects the structure of the web. “It struck me that these tensions would make the consortium a proving ground for the relative merits of weblike and treelike societal structures,” he wrote, “I was eager to start the experiment.” A web consortium born of compromise and defined by tension, however, was not Berners-Lee’s first plan.

In March of 1992, Berners-Lee flew to San Diego to attend a meeting of the Internet Engineering Task Force, or IETF. Created in 1986, the IETF develops standards for the Internet, ranging from networking to routing to DNS. IETF standards are unenforceable and entirely voluntary. They are not sanctioned by any world government or subject to any regulations. No entity is obligated to use them. Instead, the IETF relies on a simple conceit: interoperability helps everyone. It has been enough to sustain the organization for decades.

Because everything is voluntary, the IETF is managed by a labyrinthine set of rules and ritualistic processes that can be difficult to understand. There is no formal membership, though anyone can join (in its own words it has “no members and no dues”). Everyone is a volunteer; no one is paid. The group meets in person three times a year at shifting locations.

The IETF operates on a principle known as rough consensus (and, oftentimes, running code). Rather than a formal voting process, disputed proposals need to come to some agreement where most, if not all, of the members in a technology working group agree. Working group members decide when rough consensus has been met, and its criteria shift from year to year and group to group. In some cases, the IETF has turned to humming to take the temperature of a room. “When, for example, we have face-to-face meetings… instead of a show of hands, sometimes the chair will ask for each side to hum on a particular question, either ‘for’ or ‘against’.”

It is against the backdrop of these idiosyncratic rules that Berners-Lee first came to the IETF in March of 1992. He hoped to set up a working group for each of the primary technologies of the web: HTTP, HTML, and the URI (which would later be renamed to URL through the IETF). In March he was told he would need another meeting, this one in June, to formally propose the working groups. Somewhere close to the end of 1993, a year and a half after he began, he had persuaded the IETF to set up all three.

The process of rough consensus can be slow. The web, by contrast, had redefined what fast could look like. New generations of browsers were coming out in months, not years. And this was before Netscape and Microsoft got involved.

The development of the web had spiraled outside Berners-Lee’s sphere of influence. Inline images — the feature maybe most responsible for the web’s success — were the product of a late night brainstorming session over snacks and soda in the basement of a university lab. Berners-Lee learned about them when everyone else did, when Marc Andreessen posted the idea to the www-talk mailing list.

Tension. Berners-Lee knew that it would come. He had hoped, for instance, that images might be treated differently (“Tim bawled me out in the summer of ’93 for adding images to the thing,” Andreessen would later say), but the web was not his. It was not anybody’s. He had designed it that way.

With all of its rules and rituals, the IETF did not seem like the right fit for web standards. In private discussions at universities and research labs, Berners-Lee had begun to explore a new path. Something like a consortium of stakeholders in the web — a collection of companies that create browsers and websites and software — that could come together to agree upon a rough consensus of their own. By the end of 1993, his work on the W3C had already begun.


Dave Raggett, a seasoned researcher at Hewlett-Packard, had a different view of the web. He wasn’t from academia, and he wasn’t working on a browser (not yet anyway). He understood almost instinctively the utility of the web as commercial software. Something less like a digital phonebook and more like Apple’s wildly successful Hypercard application.

Unable to convince his bosses of the web’s promise, Raggett used the ten percent of time HP allowed for its employees to pursue independent research to begin working with the web. He anchored himself to the community, an active member of the www-talk mailing list and a regular presence at IETF meetings. In the fall of 1992, he had a chance to visit with Berners-Lee at CERN.

Yuri Rubinsky

It was around this time that he met Yuri Rubinsky, an enthusiastic advocate for Standard Generalized Markup Language, or SGML, the language that HTML was originally based on. Rubinsky believed that the limitations of HTML could be solved by a stricter adherence to the SGML standard. He had begun a campaign to bring SGML to the web. Raggett agreed — but only to a point. He was not yet ready to sever ties with HTML.

Each time Mosaic shipped a new version, or a new browser was released, the gap between the original HTML specification and the real world web widened. Raggett believed that a more comprehensive record of HTML was required. He began working on an enhanced version of HTML, and a browser to demo its capabilities. Its working title was HTML+.

Raggett’s work soon began to spill over into his home life. He’d spend most nights “at a large computer that occupied a fair portion of the dining room table, sharing its slightly sticky surface with paper, crayons, Lego bricks and bits of half-eaten cookies left by the children.” After a year of around-the-clock work, Raggett had a version of HTML+ ready to go in November of 1993. His improvements to the language were far from superficial. He had managed to add all of the little things that had made their way into browsers: tables, images with captions and figures, and advanced forms.

Several months later, in May of 1994, developers and web enthusiasts traveled from all over the world to come to what some attendees would half-jokingly refer to as the “Woodstock of the Web,” the first official web conference organized by CERN employee and web pioneer Robert Cailliau. Of the 800 people clamoring to come, the space in Geneva could hold only 350. Many were meeting for the first time. “Everyone was milling about the lobby,” web historian Marc Weber would later describe, “electrified by the same sensation of meeting face-to-face actual people who had been just names on an email or on the www-talk [sic] mailing list.”

Members of the first conference

It came at a moment when the web stood on the precipice of ubiquity. Nobody from the Mosaic team had managed to make it (they had their own competing conference set for just a few months later), but there were already rumors about Mosaic alum Marc Andreessen’s new commercial browser that would later be called Netscape Navigator. Mosaic, meanwhile, had begun to license its browser for commercial use. An early version of Yahoo! was growing exponentially as more and more publications, like GNN, Wired, The New York Times, and The Wall Street Journal, came online.

Progress at the IETF, on the other hand, had been slow. It was too meticulous, too precise. In the meantime, browsers like Mosaic had begun to add whatever they wanted — particularly to HTML. Tags supported by Mosaic couldn’t be found anywhere else, and website creators were forced to choose between cutting-edge technology and compatibility with other browsers. Many were choosing the former.

HTML+ was the biggest topic of conversation at the conference. But another highlight was when Dan Connolly — a young, “red-haired, navy-cut Texan” who worked at the supercomputer manufacturer Convex — took the stage. He gave a talk called “Interoperability: Why Everyone Wins.” Later, and largely because of that talk, Connolly would be made chair of the IETF HTML Working Group.

In a prescient moment capturing the spirit of the room, Connolly described a future in which the language of HTML fractured, when each browser implemented its own set of HTML tags in an effort to edge out the competition. The solution, he concluded, was an HTML standard that was able to evolve at the pace of browser development.

Raggett’s HTML+ made a strong case for becoming that standard. It was exhaustive, describing the new HTML used in browsers like Mosaic in near-perfect detail. “I was always the minimalist, you know, you can get it done without that,” Connolly later said. “Raggett, on the other hand, wanted to expand everything.” The two struck an agreement. Raggett would continue to work through HTML+ while Connolly focused on a narrower upgrade.

Connolly’s version would soon become HTML 2, and after a year of back and forth and rough consensus building at the IETF, it became an official standard. It didn’t have nearly the detail of HTML+, but Connolly was able to officially document features that browsers had been supporting for years.

Raggett’s proposal, renamed HTML 3, was stuck. In an effort to accommodate an expanding web, it continued to grow in size. “To get consensus on a draft 150 pages long and about which everyone wanted to voice an opinion was optimistic – to say the least,” Raggett would later put it, rather bluntly. But by then, Raggett was already working at the W3C, where HTML 3 would soon become a reality.


Berners-Lee also spoke at the first web conference in Geneva, closing it out with a keynote address. He didn’t specifically mention the W3C. Instead, he focused on the role of the web. “The people present were the ones now creating the Web,” he would later write of his speech, “and therefore were the only ones who could be sure that what the systems produced would be appropriate to a reasonable and fair society.”

In October of 1994, he embarked on his own part in making a more equitable and accessible future for the web. The World Wide Web Consortium was officially announced. Berners-Lee was joined by a handful of employees — a list that included both Dave Raggett and Dan Connolly. Two months later, in the second half of the second week of December of 1994, the members of the W3C met for the first time.

Before the meeting, Berners-Lee had a rough sketch of how the W3C would work. Any company or organization could join, provided it paid the membership fee, a tiered price based on the size of the company. Member organizations would send representatives to W3C meetings to provide input into the process of creating standards. By limiting W3C proceedings to paying members, Berners-Lee hoped to focus and scope the conversations to real world implementations of web technologies.

Yet despite a closed membership, the W3C operates in the open whenever possible. Meeting notes and documentation are open to the public. Any code written as part of experiments in new standards is freely downloadable.

Gathered at MIT, the W3C members next had to decide how its standards would work. They decided on a process that stops just short of rough consensus. Though they are often called standards, the W3C does not create official standards for the web. The technical specifications created at the W3C are known, in their final form, as recommendations.

They are, in effect, proposals. They outline, in great detail, how exactly a technology works. But they leave enough open that it is up to browsers to figure out exactly how the implementation works. “The goal of the W3C is to ensure interoperability of the Web, and in the long range that’s realistic,” former head of communications at the W3C Sally Khudairi once described it, “but in the short range we’re not going to play Web cops for compliance… we can’t force members to implement things.”

Initial drafts create a feedback loop between the W3C and its members. They provide guidance on web technologies, but even as specifications are in the process of being drafted, browsers begin to introduce them and developers are encouraged to experiment with them. Each time issues are found, the draft is revised, until enough consensus has been reached. At that point, a draft becomes a recommendation.

There would always be tension, and Berners-Lee knew that well. The trick was not to try to resist it, but to create a process where it becomes an asset. Such was the intended effect of recommendations.

At the end of 1995, the IETF HTML working group was replaced by a newly created W3C HTML Editorial Review Board. HTML 3.2 would be the first HTML version released entirely by the W3C, based largely on Raggett’s HTML+.


There was a year in web development, 1997, when browsers broke away from the still-new recommendations of the W3C. Microsoft and Netscape began to release a new set of features separate and apart from agreed upon standards. They even had a name for them. They called them Dynamic HTML, or DHTML. And they almost split the web in two.

DHTML was originally celebrated. Dynamic meant fluid. A natural evolution from HTML’s initial inert state. The web, in other words, came alive.

Touting its capabilities, a feature in Wired in 1997 referred to DHTML as the “magic wand Web wizards have long sought.” In its enthusiasm for the new technology, it made only a small note that “Microsoft and Netscape, to their credit, have worked with the standards bodies,” specifically on the introduction of Cascading Style Sheets, or CSS, but that most features were being added “without much regard for compatibility.”

The truth on the ground was that using DHTML required targeting one browser or the other, Netscape or Internet Explorer. Some developers simply picked a side, slapping a banner at the bottom of their site that read “Best Viewed In…” one browser or the other. Others ignored the technology entirely, hoping to avoid its tangled complexity.
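What that targeting looked like in practice was a fork in the page's script for every dynamic effect. The fragment below is a hypothetical sketch of the common pattern, built around the two proprietary object models the 4.0 browsers shipped: Internet Explorer's document.all collection and Netscape's document.layers. The element ID and the values are invented for illustration.

```html
<!-- A hypothetical sketch of DHTML-era browser forking; the ID and values are invented. -->
<div id="banner" style="position: absolute; left: 0px; top: 10px;">Hello, dynamic web.</div>
<script type="text/javascript">
  function slideBanner(x) {
    if (document.all) {
      // Internet Explorer 4: elements are reachable through document.all
      document.all["banner"].style.left = x + "px";
    } else if (document.layers) {
      // Netscape Navigator 4: positioned elements are exposed as "layers"
      document.layers["banner"].left = x;
    }
    // Any other browser is simply left out.
  }
  slideBanner(200);
</script>
```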

Browsers had their reasons, of course. Developers and users were asking for things not included in the official HTML specification. As one Microsoft representative put it, “In order to drive new technologies into the standards bodies, you have to continue innovating… I’m responsible to my customers and so are the Netscape folks.”

A more dynamic web was not a bad thing, but a splintered web was untenable. For some developers, it would prove to be the final straw.


Following the release of HTML 3.2, and with the rapid advancement of browsers, the HTML Editorial Review Board was divided into three parts. Each was given a separate area of responsibility to make progress on, independent of the others.

Dr. Lauren Wood (Photo: XML Summer School)

Dr. Lauren Wood became chair of the Document Object Model Working Group. A former theoretical nuclear physicist, Wood was the Director of Product Technology at SoftQuad, a company founded by SGML advocate Yuri Rubinsky. While there, she helped work on the HoTMetaL HTML editor. The DOM spec created a standardized way for browsers to implement Dynamic HTML. “You need a way to tie your data and your programs together,” was how Wood described it, “and the Document Object Model is that glue.” Her work on the Document Object Model, and later XML, would have a long-lasting influence on the web.
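In practice, that glue looks something like the hypothetical sketch below: a script reaches into the page through standardized DOM calls rather than a browser-specific object model. The element ID and the numbers are invented for illustration.

```html
<!-- A minimal sketch of the DOM as "glue" between a page and a script; the id and numbers are invented. -->
<p>Total: <span id="total">0.00</span></p>
<script type="text/javascript">
  var prices = [19.99, 4.50, 12.00];
  var sum = 0;
  for (var i = 0; i < prices.length; i++) {
    sum += prices[i];
  }
  // The same standardized call works in any browser that implements the DOM,
  // instead of proprietary collections like document.all or document.layers.
  document.getElementById("total").firstChild.nodeValue = sum.toFixed(2);
</script>
```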

The Cascading Style Sheets Working Group was chaired by Chris Lilley. Lilley’s background was in computer graphics, as a teacher and specialist in the Computer Graphics Unit at the University of Manchester. Lilley had worked at the IETF on the HTML 2 spec, as well as a specification for Portable Network Graphics (PNG), but this would mark his first time as a working group chair.

CSS was still a relative newcomer in 1997. It had been in the works for years, but had yet to have a major release. Lilley would work alongside the creators of CSS — Håkon Lie and Bert Bos — to create the first CSS standard.

The final working group was for HTML, left under the auspices of Dan Connolly, continuing his position from the IETF. Connolly had been around the web almost as long as Berners-Lee had. He was one of the people watching back in October of 1991, when Berners-Lee demoed the web for a small group of unimpressed people at a hypertext conference in San Antonio. In fact, it was at that conference that he first met the woman who would later become his wife.

After he returned home, he experimented with the web. He messaged Berners-Lee a month later. It was only four words: “You need a DTD.”

When Berners-Lee developed the language of HTML, he borrowed its conventions from a predecessor, SGML. IBM developed Generalized Markup Language (GML) in the early 1970s to make it easier for typists to create formatted books and reports. However, it quickly got out of control, as people would take shortcuts and use whatever version of the tags they wanted.

That’s when the Document Type Definition, or as Connolly called it, a DTD, was developed. DTDs are what added the “S” (Standard) to GML. Using SGML, you can create a standardized set of instructions for your data, its schema and its structure, to help computers understand how to interpret it. These instructions are a document type definition.
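As a rough illustration, a DTD for a made-up recipe format might look like the sketch below. The element names are hypothetical; the point is that the definition spells out which elements exist and what each one is allowed to contain.

```xml
<!-- A hypothetical document type definition for a simple recipe format. -->
<!DOCTYPE recipe [
  <!ELEMENT recipe      (title, ingredients, steps)>
  <!ELEMENT title       (#PCDATA)>
  <!ELEMENT ingredients (ingredient+)>  <!-- one or more ingredients -->
  <!ELEMENT ingredient  (#PCDATA)>
  <!ELEMENT steps       (step+)>
  <!ELEMENT step        (#PCDATA)>
]>
```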

Beginning with version 2, Connolly added a type definition to HTML. It limited the language to a smaller set of agreed-upon tags. In practice, browsers treated this more as a loose definition, continuing to implement their own DHTML features and tags. But it was a first step.

In 1997, the HTML Working Group, now inside of the W3C, began to work on the fourth iteration of HTML. It expanded the language, adding far more advanced features to the specification: complex tables and forms, better accessibility, and a more defined relationship with CSS. But it also split HTML from a single schema into three different document type definitions for browsers to adopt.

The first, Frameset, was not typically used. The second, Transitional, was there to accommodate the mistakes of the past. It covered a larger subset of HTML that included the non-standard, presentational tags browsers had used for years, such as <font> and <center>. It was treated as the default.

The third DTD was called Strict. Under the Strict definition, HTML was pared down to only its standard, non-presentational features. It removed all of the unique tags introduced by Netscape and Microsoft, leaving only structured elements. If you use HTML today, it likely draws on the same base of tags.
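Authors opted into one of the three definitions with a doctype declaration at the top of the page. The identifiers shown below are the ones from the later HTML 4.01 revision, which are the forms most often cited today.

```html
<!-- Strict: structural HTML only -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
  "http://www.w3.org/TR/html4/strict.dtd">

<!-- Transitional: also allows older presentational tags like <font> and <center> -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">

<!-- Frameset: for pages built out of frames -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN"
  "http://www.w3.org/TR/html4/frameset.dtd">
```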

The Strict definition drew a line in the sand. It said, this is HTML. And it finally gave a way for developers to code once for every browser.


In the August 1998 issue of Computerworld — tucked between large features on the impending doom of Y2K, the bristling potential of billing on the World Wide Web, and antitrust concerns about Microsoft — was a small announcement. Its headline read, “Browser standards targeted.” It was about the creation of a new grassroots organization of web developers aimed at bringing web standards support to browsers. It was called the Web Standards Project.

Glenn Davis, co-creator of the project, was quoted in the announcement. “The problem is, with each generation of the browser, the browser manufacturers diverge farther from standards support.” Developers, forced to write different code for different browsers for years, had simply had enough. A few off-hand conversations in mailing lists had spiraled into a fully grown movement. At launch, 450 developers and designers had already signed up.

Davis was not new to the web, and he understood its challenges. His first experience on the web dated all the way back to 1994, just after Mosaic had first introduced inline images, when he created the gallery site Cool Site of the Day. Each day, he would feature a single homepage from an interesting or edgy or experimental site. For a still small community of web designers, it was an instant hit.

There were no criteria other than sites that Davis thought were worth featuring. “I was always looking for things that push the limits,” was how he would later define it. Davis helped to redefine the expectations of the early web, using the moniker cool as a shorthand to encompass many possibilities. Dot-com Design author and media professor Megan Ankerson points out that “this ecosystem of cool sites gestured towards the sheer range of things the web could be: its temporal and spatial dislocations, its distinction from and extension of mainstream media, its promise as a vehicle for self-publishing, and the incredible blend of personal, mundane, and extraordinary.” For a time on the web, Davis was the arbiter of cool.

As time went on Davis transformed his site into Project Cool, a resource for creating websites. In the days of DHTML, Davis’ Project Cool tutorials provided constructive and practical techniques for making the most out of the web. And a good amount of his writing was devoted to explaining how to write code that was usable in both Netscape Navigator and Microsoft’s Internet Explorer. He eventually reached a breaking point, along with many others. At the end of 1997, Netscape and Microsoft both released their 4.0 browsers with spotty standards support. It was already clear that upcoming 5.0 releases were planning to lean even further into uneven and contradictory DHTML extensions.

Running out of patience, Davis helped set up a mailing list with George Olsen and Jeffrey Zeldman. The list started with two dozen people, but it gathered support quickly. The Web Standards Project, known as WaSP, officially launched from that list in August of 1998. It began with a few hundred members and announcements in magazines like Computerworld. Within a few months, it would have tens of thousands of members.

The strategy for WaSP was to push browsers — publicly and privately — into web standards support. WaSP was not meant to be a hyperbolic name. “The W3C recommends standards. It cannot enforce them,” Zeldman once said of the organization’s strategy, “and it certainly is not about to throw public tantrums over non-compliance. So we do that job.”

A prominent designer and standards advocate, Zeldman would have an enduring influence on makers of the web. He would later run WaSP during some of its most influential years. His website and mailing list, A List Apart, would become a gathering place for designers who cared about web standards and using the latest web technologies.

WaSP would change focus several times during their decade-and-a-half tenure. They pushed browsers to make better use of HTML and CSS. They taught developers how to write standards-based code. They advocated for greater accessibility and tools that supported standards out of the box.

But their mission, published to their website on the first day of launch, would never falter. “Our goal is to support these core standards and encourage browser makers to do the same, thereby ensuring simple, affordable access to Web technologies for all.”

WaSP succeeded in their mission on a few occasions early on. Some browsers, notably Opera, had standards baked in at the beginning; their efforts were praised by WaSP. But the two browsers that collectively made up a majority of web use — Internet Explorer and Netscape Navigator — would need some work.

A four billion dollar sale to AOL in 1998 was not enough for Netscape to compete with Microsoft. After the release of Netscape 4.0, they doubled down on a bold strategy, choosing to release the entire browser’s code as open source under the Mozilla project. Everyday consumers could download it for free; coders were encouraged to contribute directly.

Members of the community soon noticed something in Mozilla. It had a new rendering engine, often referred to as Gecko. Unlike planned releases of Netscape 5, which had patchy standards support at best, Gecko supported a fairly complete version of HTML 4 and CSS.

WaSP diverted their formidable membership to the task of pushing Netscape to include Gecko in its next major release. One familiar WaSP tactic was known as roadblocking. Some of its members worked at publications like HotWired and CNet. WaSP would coordinate articles across several outlets all at once criticizing, for instance, Netscape’s neglect of standards in the face of a perfectly reasonable solution in Gecko. By doing so, they were often able to capture the attention of at least one news cycle.

WaSP also took more direct action. Members were asked to send emails to browser makers, or to sign petitions showing widespread support for standards. Overwhelming pressure from developers was occasionally enough to push browsers in the right direction.

In part because of WaSP, Netscape agreed to make Gecko part of version 5.0. Beta versions of Netscape 5 would indeed have standards-compliant HTML and CSS, but it was beset with issues elsewhere. It would take years for a release. By then, Microsoft’s dominion over the browser market would be near complete.

As one of the largest tech companies in the world, Microsoft was more insulated from grassroots pressure. The on-the-ground tactics of WaSP proved less successful when turned against the tech giant.

But inside the walls of Microsoft, WaSP had at least one faithful follower, developer Tantek Çelik. Çelik had fought tirelessly on the side of web standards for as long as his web career stretched back. He would later become a member of the WaSP Steering Committee and a representative for a number of W3C working groups, working directly on the development of standards.

Tantek Çelik (Photo: Tantek.com)

Çelik ran a team inside of Internet Explorer for Mac. Though it shared a name, branding, and general features with its far more ubiquitous Windows counterpart, IE for Mac ran on a separate codebase. Inside a colossal organization with other priorities, Çelik’s team was largely left to its own devices, working on a browser that not many people were using.

With the direction of the browser largely left up to him, Çelik began to reach out to web designers in San Francisco at the cutting edge of web technology. Through a stroke of luck he was connected to several members of the Web Standards Project. He’d visit with them and ask what they wanted to see in the Mac IE browser. “The answer: better standards support.”

They helped Çelik realize that his work on a smaller browser could be impactful. If he was able to support standards, as they were defined by the W3C, it could serve as a baseline for the code that the designers were writing. They had enough to worry about with buggy standards in IE for Windows and Netscape, in other words. They didn’t need to also worry about IE for Mac.

That was all that Çelik needed to hear. When Internet Explorer 5.0 for Mac launched in 2000, it had across-the-board support for web standards: HTML, PNG images, and, most impressively, one of the most ambitious implementations of the new Cascading Style Sheets (CSS) specification.

It would take years for the Windows version to get anywhere close to the same kind of support. Even half a decade later, after Çelik left to work at the search engine Technorati, they were still playing catch-up.


Towards the end of the millennium, the W3C found themselves at a fork in the road. They looked to their still-recent past and saw it filled with contentious support for standards — incompatible browsers with their own priorities. Then they looked the other way, to their towering future. They saw a web that was already evolving beyond the confines of personal computers. One that would soon exist on TVs and in cell phones and on devices that hadn’t been dreamed up yet, in paradigms yet to be invented. Their past and their future were incompatible. And so, they reacted.

Yuri Rubinsky had an unusual talent for making connections. In his time as a standards advocate, developer, and executive at a major software company, he had managed to find time to connect some of the web’s most influential proponents. Sadly, Rubinsky died suddenly and at a young age in 1996, but his influence would not soon be forgotten. He carried with him an infectious energy and a knack for persuasion. His friend and colleague Peter Sharpe would say upon his death that in “talking to the people from all walks of life who knew Yuri, there was a common theme: Yuri had entered their lives and changed them forever.”

Rubinsky devoted his career to making technology more accessible. He believed that without equitable access, technology was not worth building. It motivated all of the work he did, including his longstanding advocacy of SGML.

SGML is a meta-language and “you use it to build your own computer languages for your own purposes.” If you hand a document over to a computer, SGML is how you can give that computer instructions on how to understand it. It provides a standardized way to describe the structure of data — the tags that it uses and the order it is expected in. The ownership of data, therefore, is not locked up and defined at some unknown level; it is given to everybody.

Rubinsky believed in that kind of universal access, a world in which machines talked to each other in perfect harmony, passing sets of data between them, structured, ordered, and formatted for its users. His company, SoftQuad, built software for SGML. He organized and spoke at conferences about it. He created SGML Open, a consortium not unlike the W3C. “SGML provides an internationally standardized, vendor-supported, multi-purpose, independent way of doing business,” was how he once described it, “If you aren’t using it today, you will be next year.” He was almost right.

He had a mission on the web as well. HTML is actually based on SGML, though it uses only a small part of it. Rubinsky was beginning to have conversations with members of the W3C, like Berners-Lee and Raggett, about bringing a more comprehensive version of SGML to the web. He was even writing a book called SGML on the Web before his death.

In the hallways of conferences and in threaded mailing lists, Rubinsky used his unique propensity for persuasion to bring several people together on the subject, including Dan Connolly, Lauren Wood, Jon Bosak, James Clark, Tim Bray, and others. Eventually, those conversations moved into the W3C. They formed a working group and, in November of 1996, eXtensible Markup Language (XML) was formally announced; it was later adopted as a W3C Recommendation. The announcement took place at an annual SGML conference in Boston, run by an organization where Rubinsky sat on the Board of Directors.

XML is SGML, minus a few things, renamed and repackaged as a web language. That means it goes far beyond the capabilities of HTML, giving developers a way to define their own structured data with completely unique tags (e.g., an <ingredients> tag in a recipe, or an <author> tag in an article). Over the years, XML has become the backbone of widely used technologies, like RSS and MathML, as well as server-level APIs.
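A small, hypothetical XML document along those lines might look like the sketch below. The tags are defined entirely by whoever owns the data, not by a standards body.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical recipe data with author-defined tags. -->
<recipe>
  <title>Oatmeal Cookies</title>
  <author>A. Baker</author>
  <ingredients>
    <ingredient>rolled oats</ingredient>
    <ingredient>brown sugar</ingredient>
  </ingredients>
</recipe>
```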

XML was appealing to the maintainers of HTML, a language that was beginning to feel somewhat complete. “When we published HTML 4, the group was then basically closed,” Steve Pemberton, chair of the HTML working group at the time, described the situation. “Six months later, though, when XML was up and running, people came up with the idea that maybe there should be an XML version of HTML.” The merging of HTML and XML became known as XHTML. Within a year, it was the W3C’s main focus.

The first iterations of XHTML, drafted in 1998, were not that different from what already existed in the HTML specifications. The only real difference was that it had stricter rules for authors to follow. But that small constraint opened up new possibilities for the future, and XHTML was initially celebrated. The Web Standards Project issued a press release on the day it was published, lauding its capabilities, and developers began to adopt the stricter markup rules it required, in line with the work Connolly had already done with Document Type Definitions.
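The difference was mostly a matter of discipline. The fragment below is a hypothetical before-and-after: the first half is the kind of loose HTML browsers had always accepted, and the second is the same content rewritten under XHTML's rules, with lowercase tags, quoted attributes, and every element explicitly closed. The file name is invented.

```html
<!-- Loose HTML that browsers would happily render: -->
<P>Hello, world
<BR>
<IMG SRC=logo.gif>

<!-- The same fragment written to XHTML's rules: -->
<p>Hello, world<br />
  <img src="logo.gif" alt="Site logo" /></p>
```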

XHTML represented a web with deeper meaning. Data would be owned by the web’s creators. And together, computers and programmers could create a more connected and understandable web. That meaning was labeled semantics. The Semantic Web would become the W3C’s greatest ambition, and they would chase it for close to a decade.

W3C, 2000

Subsequent versions of XHTML would introduce even stricter rules, leaning harder into the structure of XML. Released in 2002, the XHTML 2.0 specification became a harbinger of the language’s undoing. It removed backwards compatibility with older versions of HTML, even as Microsoft’s Internet Explorer — the leading browser by a wide margin at this point — refused to support it. “XHTML 2 was a beautiful specification of philosophical purity that had absolutely no resemblance to the real world,” said Bruce Lawson, an HTML evangelist for Opera at the time.

Rather than uniting standards under a common banner, XHTML, and the refusal of major browsers to fully implement it, threatened to split the web apart permanently. It would take something bold to push web standards in a new direction. But that was still years away.




Chapter 6: Web Design

Previously in web history…

After the first websites demonstrate the commercial and aesthetic potential of the web, the media industry floods the web with a surge of new content. Amateur webzines — which define a voice and tone unique to the web — are soon joined by traditional publishers. By the mid to late ’90s, most major companies will have a website, and the popularity of the web will begin to explode. Search engines emerge as one solution to cataloging the expanding universe of websites, but even they struggle to keep up. Brands soon begin to look for a way to stand out.

Alec Pollak was little more than a junior art director cranking out print ads when he got a call that would change the path of his career. He worked at the advertising agency Grey Entertainment, later called Grey Group. The agency had spent decades acquiring some of the biggest clients in the industry.

Pollak spent most of his days in the New York office, mocking up designs for magazines and newspapers. Thanks to a knack for computers, a hobby of his, he would get the odd digital assignment or two, working on a multimedia offshoot for an ad campaign. Pollak was on the Internet in the days of BBS. But when he saw the World Wide Web, its pixels brought to life on his screen by the Mosaic browser, he found a calling.

Sometime in early 1995, he got that phone call. “It was Len Fogge, President of the agency, calling little-old, Junior-Art-Director me,” Pollak would later recall. “He’d heard I was one of the few people in the agency who had an email address.” Fogge was calling because a particularly forward-thinking client (later co-founder of Warner Bros Online Donald Buckley) wanted a website for the upcoming film Batman Forever. The movie’s key demographic — tech-fluent, generally well-to-do comic book aficionados — made it perfect for a web experiment. Fogge was calling Pollak to see if that was something he could do: build a website. Pollak never had. He knew little about the web other than how to browse it. The offer, however, was too good to pass up. He said, yes, he absolutely could build a website.

Art director Steve McCarron was assigned the project. Pollak had only managed to convince one other employee at Grey of the web’s potential, copywriter Jeffrey Zeldman. McCarron brought the two of them in to work on the site. With little in the way of examples, the trio locked themselves in a room and began to work out what they thought a website should look and feel like. Partnering with a creative team at Grey, and a Perl programmer, they emerged three months later with something cutting edge. The Batman Forever website launched in May of 1995.

The Batman Forever website

When you first came to the site, a moving bat (scripted in Perl by programmer Douglas Rice) flew towards your screen, revealing behind it the website’s home page. It was filled with short, punchy copy and edgy visuals that played on the film’s gothic motifs. The site featured a message board where fans could gather and discuss the film. It had a gallery of videos and images available for download, tiny low-resolution clips and stills from the film. It was packed edge-to-edge with content and easter eggs.

It was hugely successful and influential. At the time, it was visited by just about anyone with a web connection and a browser, Batman fan or not.

Over the next couple of years — a full generation in Internet time — this is how design would work on the web. It would not be a deliberate, top-down process. The field of web design would take shape from blurry edges, coming into focus a little at a time. The practice would be taken up not by polished professionals but by junior art directors and designers fresh out of college, amateurs with little to lose at the beginning of their careers. In other words, just as outsiders built the web, outsiders would design it.

Interest in the early web required tenacity and personal drive, so it sometimes came from unusual places. Like when Gina Blaber recruited a team inside of O’Reilly nimble and adventurous enough to design GNN from scratch. Or when Walter Isaacson looked for help with Pathfinder and found Chan Suh toiling away at websites deeply embedded in the marketing arm of a Time Warner publication. These weren’t the usual suspects. These were web designers.


Jaime Levy was certainly an outsider, with a massive influence on the practice of design on the web. A product of the San Fernando Valley punk scene, Levy came to New York to attend NYU’s Interactive Telecommunications Program. Even at NYU, a school which had produced some of the most influential artists and filmmakers of the time, Levy stood out. She had a brash attitude and a sharp wit, balanced by an incredible ability to self-motivate and adapt to new technology, and, most importantly, an explosive and immediately recognizable aesthetic.

Levy’s initial dismissal of computers as glorified calculators for shut-ins dropped once she saw what they could do with graphics. After graduating from NYU, Levy brought her experience in the punk scene designing zines — which she had designed, printed and distributed herself — to her multimedia work. One of her first projects was designing a digital magazine called Electric Hollywood using Hypercard, which she loaded and distributed on floppy disks. Levy mixed bold colors and grungy zine-inspired artistry with a clickable, navigable hypertext interface. Years before the web, Levy was building multimedia that felt a lot like what it would become.

Electric Hollywood was enough to cultivate a following. Levy was featured in magazines and in interviews. She also caught the eye of Billy Idol, who recruited her to create graphical interactive liner notes for his latest album, Cyberpunk, distributed on floppies alongside the CD. The album was a critical and commercial failure, but Levy’s reputation among a growing clique of digital designers was cemented.

Still, nothing compared to the first time she saw the web. Levy experienced the World Wide Web, as author Claire Evans describes it in her book Broad Band, “as a conversion.” “Once the browser came out,” Levy would later recall, “I was like, ‘I’m not making fixed-format anymore. I’m learning HTML,’ and that was it.” Levy’s style, which brought the user in to experience her designs on their own terms, was a perfect fit for the web. She began moving her attention to this new medium.

People naturally gravitated towards Levy. She was a fixture in Silicon Alley, the media’s name for the new tech and web scene concentrated in New York City. Within a few years, they would be the ushers of the dot-com boom. In the early ’90s, they were little more than a scrappy collection of digital designers and programmers and writers: “true believers” in the web, as they called themselves.

Levy was one of their acolytes. She became well known for her Cyber Slacker parties: late-night hangouts where she packed her apartment with a ragtag group of hackers and artists (often with appearances by DJ Spooky). Designers looked to her for inspiration. Many would emulate her work in their own designs. She even had some mainstream appeal. Whenever she graced the covers of major magazines like Esquire and Newsweek, she always had a skateboard or a keyboard in her hands.

It was her near mythic status that brought IT company Icon CMT calling about their new web project, a magazine called Word. The magazine would be Levy’s most ambitious project to date, and where she left her greatest influence on web design. Word would soon become a proving ground for her most impressive design ideas.

Word Magazine

Levy was put in charge of assembling a team. Her first recruit was Marisa Bowe, whom she had met on the Echo messaging board (BBS) run by Stacy Horn, based in New York. Bowe was originally brought on as a managing editor. But when editor in chief Jonathan Van Meter left before the project even got off the ground, Bowe was put in charge of the site’s editorial vision.

Levy found a spiritual partner in Bowe, who had come to the web with a similar ethos and passion. Bowe would become a large part of defining the voice and tone that was so integral to the webzine revolution of the ’90s. She had a knack for locating authentic stories, and Word’s content was often, as Bowe called it, “first-person memoirs.” People would take stories from their lives and relate them to the cultural topics of the day. And Bowe’s writing and editorial style — edgy, sarcastic, and conversational — would be backed by the radical design choices of Levy.

Articles that appeared on Word were one-of-a-kind, where the images, backgrounds, and colors chosen helped facilitate the tone of a piece. These art-directed posts pulled from Levy’s signature style, a blend of 8-bit graphics and off-kilter layouts, with the chaotic bricolage of punk rock zines. Pages came alive, representing through design the personality of the post’s author.

Word also became known for experimenting with new technologies almost as soon as they were released. Browsers were still rudimentary in terms of design possibilities, but the magazine didn’t shy away from stretching those possibilities as far as they could go. It was one of the first magazines to use music, carefully paired with the content of the articles. When Levy first encountered what HTML tables could do to create grid-based layouts, she needed to use them immediately. “Everyone said, ‘Oh my God, this is going to change everything,’” she later recalled in an interview. “And I went back to Word.com and I’d say, ‘We’ve got to do an artistic piece with tables in it.’ Every week there was some new HTML code to exploit.”

The duo was cocky about their work, and with good reason. It would be years before others would catch up to what they did on Word. “Nobody is doing anything as interesting as Word, I wish someone would try and kick our ass,” Levy once bragged. Bowe echoed the sentiment, describing the rest of the web as “like frosting with no cake.” Still, for a lot of designers, their work would serve as inspiration and a template for what was possible. The whole point was to show off a bit.

Levy’s design was inspired by her work in the print world, but it was something separate and new. When she added some audio to a page, or painted a background with garish colors, she did so to augment its content. The artistry was the point. Things might have been a bit hard to find, a bit confusing, on Word. But that was ok. The joy of the site was discovering its design. Levy left the project before its first anniversary, but the pop art style would continue on the site under new creative director Yoshi Sodeoka. And as the years went on, others would try to capture the same radical spirit.

A couple of years later, Ben Benjamin would step away from his more humdrum work at CNet to create a more personal venture known as Superbad, a mix of offbeat, banal content and charged visuals that created a place of exploration. There was no central navigation or anchor to the experience. One could simply click and see what they found next.

The early web also saw its most avant-garde movement in the form of Net.art, a loose community of digital artists pushing their experiments into cyberspace. Net artists exploited digital artifacts to create interactive works of art. For instance, Olia Lialina created visual narratives that used hypertext to glue together animated panels and prose. The collective Jodi.org, on the other hand, made a website that looked like complete gibberish, hiding its true content in the source code of the page itself.

These were the extreme examples. But they helped create a version of the web that felt unrefined. Web work, therefore, was handed to newcomers and subordinates to figure out.

And so the web came to be defined by a class of people who were willing to experiment — twenty-somethings fresh out of college, in Silicon Valley, Silicon Alley, and everywhere in between, who wrote the very first rules of web design. Some, like Levy and the team at Grey, pulled from their graphic design roots. Others tried something completely new.

There was no canvas, only the blaring white screen of a blank code editor. There was no guide, only bits of data streaming around the world.

But not for long.


In January of 1996, two web design books were published. The first was called Designing for the Web, by Jennifer Robbins, one of the original designers on the GNN team. Robbins had compiled months of notes about web design into a how-to guide for newbies. The second, Designing Web Graphics, was written by Lynda Weinman, by then already owner of the eponymous web tutorial site Lynda.com. Weinman drew on her experience in the film industry and in animation to give her practical guide to the web a visual language, a fusion of abstract thoughts on a new medium and concrete tips for new designers.

At the time, there were technical manuals and code guides, but few publications truly dedicated to design. Robbins and Weinman provided a much needed foundation.

Six months later, a third book was published, Creating Killer Web Sites, written by Dave Siegel. It was a very different kind of book. It began with a thesis. The newest generation of websites, what Siegel referred to as third generation sites, needed to guide visitors through their experiences. They needed to be interactive, familiar, and engaging. Achieving that level of interactivity, Siegel argued, required more than what the web platform could provide. What followed from this thesis was a book of markup hacks, ways to use HTML for things it wasn’t strictly made for. Siegel popularized techniques that would soon become a de facto standard: using HTML tables and spacer GIFs to create advanced layouts, and using images to display heading fonts and visual backgrounds.
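A simplified, hypothetical version of the technique looks something like the sketch below: a borderless table holds the layout's columns open, and a transparent one-pixel GIF is stretched to whatever width the design calls for. The file names and pixel widths are invented.

```html
<!-- A sketch of the table-and-spacer-GIF layout technique; file names and widths are invented. -->
<table width="600" border="0" cellpadding="0" cellspacing="0">
  <tr>
    <!-- A transparent 1x1 GIF, stretched to prop each column open -->
    <td width="150"><img src="spacer.gif" width="150" height="1" alt=""></td>
    <td width="450"><img src="spacer.gif" width="450" height="1" alt=""></td>
  </tr>
  <tr>
    <td valign="top"><img src="nav.gif" alt="Navigation"></td>
    <td valign="top">Body copy lives in a fixed 450-pixel column.</td>
  </tr>
</table>
```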

The publishing cadence of 1996 makes a good case study for the state and future of web design. The themes and messages of the books illustrate two points very well.

The first is the maturity of web design as a practice. The books published at the beginning of the year drew on predecessors — including Robbins from her time as a print designer, and Weinman from her work in animation — to help contextualize and codify the emerging field of web design. Six months later, that codification was already being expanded and made repeatable by writers like Siegel.

The second point it illustrates is a tension that was beginning to form. In the next few years, designers would begin to hone their craft. The basic layouts and structures of a page would become standardized. New best practices would be catalogued in dozens of new books. Web design would become a more mature practice, an industry all of its own.

But browsers were imperfect and HTML was limited. Coding the intricate designs of Word or Superbad required a bit of creative thinking. Alongside the growing sophistication of the web design field came a string of techniques and tools aimed at working around browser limitations. These would cause problems later, but in the moment, they gave designers freedom. The history of web design is interwoven with this push and pull between freedom and constraint.


In March of 1995, Netscape introduced a new feature to version 1.1 of Netscape Navigator. It was called server push, and it could be used to stream data from a server to the browser, updating the page dynamically. Its most common use was thought to be real-time data without refreshes, like a moving stock ticker or an updating news widget. But it could also be used for animation.

On the day that server push was released, there were two websites that used it. The first was the Netscape homepage. The second was a site with a single, animated bouncing blue dot, which gave it its name: TheBlueDot.com.

TheBlueDot.com

The animation, and the site, were created by Craig Kanarick, who had worked long into the night the day before Netscape’s update was released to have it ready for Day One. Designer Clay Shirky would later describe the first time he saw Kanarick’s animation: “We were sitting around looking at it and were just […] up until that point, in our minds, we had been absolutely cock of the walk. We knew of no one else who was doing design as well as Agency. The Blue Dot came up, and we wanted to hate it, but we looked at it and said, ‘Wow, this is really good.’”

Kanarick would soon be elevated from a disciple of Silicon Alley to a dot-com legend. Along with his childhood friend Jeff Dachis, Kanarick created Razorfish, one of the earliest examples of a digital agency. Some of the web’s most influential early designers would begin their careers at Razorfish. As more sites came online, clients would come to Razorfish for fresh takes on design. The agency responded with a distinct style and mindset that permeated through all of their projects.

Jonathan Nelson, on the other hand, had only a vague idea for a nightclub when he moved to San Francisco. Nelson worked with a high school friend, Jonathan Steuer, on a way to fuse an online community with a brick-and-mortar club. They were soon joined by Brian Behlendorf, a recent Berkeley grad with a mailing list of San Francisco rave-goers and ideas for unique experiences on a still very new and untested World Wide Web.

Steuer’s day job was at Wired. He got Nelson and Behlendorf jobs there, working on the digital infrastructure of the magazine, while they worked out their idea for their club. By the time the idea for HotWired began to circulate, Behlendorf had earned himself a promotion. He worked as chief engineer on the project, directly under Steuer.

Nelson was getting restless. The nightclub idea was ill-defined and getting no traction. The web was beginning to pass him by, and he wanted to be part of it. Nelson was joined by his brother and by programmer Cliff Skolnick to create an agency of their own. One that would build websites for money. Behlendorf agreed to join as well, splitting his time between HotWired and this new company.

Nelson leased an office one floor above Wired, and the newly formed Organic Online began to try to recruit its first clients.

When HotWired eventually launched, it had sold advertising to half a dozen partners. Advertisers were willing to pay a few bucks to have proximity to the brand of cool that HotWired was peddling. None of them, however, had websites. HotWired needed people to build the ads that would be displayed on their site, but they also needed to build the actual websites the ads would link to. For the ads, they used Razorfish. For the brand microsites, they used Organic Online. And suddenly, there were web design experts.


Within the next few years, the practice of web design would go through radical changes. The amateurs and upstarts who had built the web with their fresh perspective and newcomer instincts would soon consolidate into formal enterprises. They created agencies like Organic and Razorfish, but also Agency.com, Modem Media, CKS, Site Specific, and dozens of others. These agencies had little influence on the advertising industry as a whole, at least initially. Even CKS, maybe the most popular agency in Silicon Valley, earned, as one writer noted, “in one year what Madison Avenue’s best-known ad slingers collect in just seven days.”

On the other end, the web design community was soon filled by freelancers and smaller agencies. The multi-million dollar dot-com contracts might have gone to the trendy digital agencies, but there were plenty of businesses that needed a website for a lot less.

These needs were met by a cottage industry of designers, developers, web hosts, and strategists. Many of them collected web experience the same way Kanarick and Levy and Nelson and Behlendorf had — on their own and through trial and error. But ad-hoc experimentation could only go so far. It didn’t make sense for each designer to have to re-learn web design. Shortcuts and techniques were shared. Rules were written. And web design trod on more familiar territory.

The Blue Dot launched in 1995. That’s the same year that Word and the Batman Forever sites launched. They were joined that same year by Amazon and eBay, a realization of the commercial potential of the web. By the end of the year, more traditional corporations planted their flag on the web. Websites for Disney and Apple and Coca Cola were followed by hundreds and then thousands of brands and businesses from around the world.

Levy had the freedom to design her pages with an idiosyncratic brush. She used the language of the web to communicate meaning and reinforce her magazine’s editorial style. New websites, however, had a different value proposition. In most cases, they were there for customers. To sell something, sometimes directly or sometimes indirectly through marketing and advertising. In either case, they needed a website that was clear. Simple. Familiar. To accommodate the needs of business, commerce, and marketing online, the web design industry turned to recognizable techniques.

Starting in 1996, design practice began to standardize around common features. The primary elements on a page — the navigation and header — smoothed out from site to site. The stylistic flourishes in layout, color, and use of images from the early web were replaced by best practices and common structure. Designers drew on the work of one another and began to create repeatable patterns. The result was a web that, though less visually distinct, was easier to navigate. Like signposts alongside a road, the patterns of the web became familiar to those that used it.

In 1997, a couple of years after the launch of the Batman Forever site, Jeffrey Zeldman created the mailing list (and later website) A List Apart to begin circulating web design tutorials and topics. It was just one of a growing number of resources from web designers who rushed to fill the vacuum of knowledge surrounding web design. Web design tutorials blanketed the proto-blogosphere of mailing lists and websites. A near limitless hypertext library of techniques and tips and code examples was available to anyone who looked hard enough for it. Through that blanket distribution of expertise came new web design methodologies.

Writing a decade after the launch of A List Apart, in 2007, designer Jeffrey Zeldman defined web design as “the creation of digital environments that facilitate and encourage human activity; reflect or adapt to individual voices and content; and change gracefully over time while always retaining their identity.” Zeldman here advocates for merging a familiar interface with brand identity to create predictable, but still stylized, experiences. It’s a shift in thinking from the website as an expression of its creator’s aesthetic, to a utility centered on the user.

This philosophical shift was balanced by a technical one. The two largest browser makers, Microsoft and Netscape, vied for control of the market. They often introduced new capabilities — customizations to colors or backgrounds or fonts or layouts unique to a single browser. That made it hard for designers to create websites that looked the same in both browsers. Designers were forced to resort to fragile code (one could never be too sure if it would work the same the next day), or to turn to tools that smoothed out these differences.

Visual editors, Microsoft FrontPage and Macromedia Dreamweaver and a few others, were the first to try to right the ship of design. They gave designers a way to create websites without any code at all. Websites could be built with just the movement of a mouse. In the same way you might use a paintbrush or a drawing tool in Photoshop or MS Paint, one could drag and drop a website into being. The process even got an acronym: WYSIWYG, or “What You See Is What You Get.”

The web, a dynamic medium in its best incarnation, required more frequent updates than designers were sometimes able to deliver. Writers wanted greater control over the content of their sites, but they were often forced to call the site administrator to make updates. Developers worked out a way to separate the content from how it was output to the screen and store it in a separate database. This led to the development of the first Content Management Systems, or CMS. Using a CMS, an editor or writer could log into a special section of their website, and use simple form fields to update the content of the site. There were even rudimentary WYSIWYG tools baked right in.

Without the CMS, the web would never have been able to keep pace with the blogging revolution or the democratization of publishing that was borne out in the following decade. But database-rendered content and WYSIWYG editors introduced uniformity out of necessity. There were only so many options that could be given to designers. Content in a CMS was inserted into pre-fabricated layouts and templates. Visual editors focused on delivering the most useful and common patterns designers used in their websites.


In 1998, PBS Online unveiled a brand new version of its website. At the center of it all was a brand new section, “TeacherSource”: a repository of supplemental materials custom-made for educators to use in their classrooms. In the time since PBS first launched its website three years earlier, they had created a thriving online destination — especially for kids and educators. They had tens of thousands of pages worth of content. Two million visitors streamed through the site each day. They had won at the newly created Webby Awards two years in a row. TeacherSource was simply the latest in a long list of digital-only content that enhanced their other media offerings.

The PBS TeacherSource website

Before they began working on TeacherSource, PBS had run some focus groups with teachers. They wanted to understand where they should put their focus. The teachers were asked about the site’s design and content. They didn’t comment much about the way that images were being used, or their creative use of layouts or the designer’s choice of colors. The number one complaint that PBS heard was that it was hard to find things. The menu was confusing, and there was no place to search.

This latest version of the PBS site had a renewed design, with special attention given to its navigation. In an announcement about the site’s redesign, Cindy Johanson referred to the design’s more understandable navigation menu and in-site search as a “new front door and lots of side doors.”

It’s a useful metaphor; one that designers would often return to. However, it also doubles as a unique indicator of where web design was headed. The visual design of the page was beginning to recede into the background in favor of clarity and understanding.

The more refined — and predictable — practice of design benefited the most important part of a website: the visitor. The surfing habits of web users were becoming more varied. There were simply more websites to browse. A common language, common designs, helped make it easier for visitors to orient themselves as they bounced from one site to the next. What the web lost in visual flourish it gained back in usability. By the next major change in design, this would go by the name User Experience. But not before one final burst of creative expression.


The second version of MONOcrafts.com, launched in 1998, was a revelation. A muted palette and plain photography belied a deeper construction and design. As you navigated the site, its elements danced on the page, text folding out from the side to reveal more information, pages transitioning smoothly from one to the next. One writer described the site as “orderly and monochromatic, geometric and spare. But present, too, is a strikingly lyrical component.”

The MONOcrafts website

There was the slightest bit of friction to the experience: the menu would move away from your mouse, or you would need to wait for a transition to complete before moving from one page to the next. It was a website that was meditative, precise, and technically complex. A website that, for all its splendor, contained little more than a description of its purpose and a brief biography of its creator, Yugo Nakamura.

Nakamura began his career as a civil engineer, after studying civil engineering and architecture at Tokyo University. After working several years in the field, he found himself drawn to the screen. The physical world posed too many limitations. He would later state, “I found the simple fact that every experience was determined by the relationship between me and my surroundings, and I realised that I wanted to design the form of that relationship abstractly. That’s why I got into the web.” Drawing on the influences of notable web artists, Nakamura began to create elaborately designed websites under the moniker yugop, both for hire and as a personal passion.

yugop became famous for his expertise in a tool that gave him the freedom of composition and interactivity that had been denied to him in real-world engineering. A tool called Flash.

Flash had three separate lives before it entered the web design community. It began as software created for the pen computing market, a doomed venture that failed before it even got off the ground. From there, it was adapted to the screen as a drawing tool, and finally transformed, in 1996, into a keyframe animation package known as FutureSplash Animator. The software was paired with a new file format and embeddable player, a quirk that would prove crucial to its later success.

Through a combination of good fortune and careful planning, the FutureSplash player was added to browsers. The software’s creator, Jonathan Gay, first turned to Netscape Navigator, using the browser’s new plugin architecture to add widespread support for his file format’s player. A stroke of luck came when Microsoft’s web portal, MSN, needed to embed streaming videos on its site, a feature for which the FutureSplash player was well-suited. To make sure it could be viewed by everyone, Microsoft baked the player directly into Internet Explorer. Within the span of a few months, FutureSplash went from just another animation tool to a ubiquitous file format playable in 99% of web browsers. By the end of 1996, Macromedia had purchased FutureSplash Animator and rebranded it as Flash.

Flash was an animation tool, but de facto support in major browsers made it adaptable enough to be a web design tool as well. Designers learned how to recreate the functionality of websites inside of Flash. Rather than relegating a Flash player to a tiny corner of a webpage, some practitioners expanded the player to fill the whole screen, creating the very first Flash websites. Before long, Flash had captivated the web design community. Resources and techniques sprung up to meet the demand. Designers new to the web were met with tutorials and guides on how to build their websites in Flash.

The appeal to designers was its visual interface: drag-and-drop drawing tools that could be used to create animated navigation, transitions, and audiovisual interactivity the web couldn’t support natively. Web design practitioners had been looking for that level of precision and control since HTML tables were introduced. Flash made it not only possible but, compared to HTML, nearly effortless. With a mouse and some imagination — and very little, if any, code — you could produce sophisticated designs.

Even amid the saturation that the new Flash community would soon produce, MONOcrafts stood out. Its use of Flash was playful, but with a definitive structure and flow.

Flash 4 had been released just before Nakamura began working on his site. It included a new scripting language known as ActionScript, which gave designers a way to programmatically add new interactive elements to the page. Nakamura used ActionScript, combined with the other capabilities of Flash, to create elements that would soon be seen on every website (and now feel like ancient relics of a forgotten past).

MONOcrafts was the first time that many web designers saw an animated intro bring them into a site. In the hands of yugop and other Flash experts, it was an elegant (and, importantly, brief) introduction to the style and tone of a website. Before long, intros would become interminable, pervasive, and bothersome. So much so that designers would frequently add a “Skip Intro” button to the bottom of their sites. Clicking that button as soon as it appeared became almost a reflex for users of the late-’90s, Flash-dominated web.

Nakamura also made sophisticated use of audio, something possible with ActionScript. Digitally compressed tones and clicks gave the site a natural feel, bringing the users directly into the experience. Before long, sounds would be everywhere, music playing in the background wherever you went. After that, audio elements would become an all but extinct design practice.

And MONOcrafts used transitions, animations, and navigation that truly made it shine. Nakamura, and other Flash experts, created new approaches to transitions and animations, carefully handled and deliberately placed, that would be retooled by designers in thousands of incarnations.

Designers turned to Flash, in part, because they had no other choice. They were the collateral damage of the so-called “Browser Wars” being played out by Netscape and Microsoft. Inconsistent implementations of web technologies like HTML and CSS made them difficult tools to rely on. Flash offered consistency.

This was matched by a rise in demand from web clients. Companies with commercial or marketing needs wanted a way to stand out. In the era of Flash design, even e-commerce shopping carts zoomed across the page, animated as if in a video game. But the (sometimes excessive) embellishment was the point. There were many designers who felt they were being boxed in by the new rules of design. The outsiders who created the field of web design had graduated to senior positions at the agencies they had often founded. Some left the industry altogether. They were replaced by a new freshman class as eager to define a new medium as the last. Many of these designers turned to Flash as their creative outlet.

The results were punchy designs applied to the largest brands. “In contrast to the web’s modern, business-like aesthetic, there is something bizarre, almost sentimental, about billion-dollar multinationals producing websites in line with Flash’s worst excess: long loading times, gaudy cartoonish graphics, intrusive sound and incomprehensible purpose,” notes writer Will Bedingfield. For some, Flash design represented the summit of possibility for the web, its full potential realized. For others, it was a gaudy nuisance. Its influence, however, is unquestionable.

Following the rise of Flash in the late 90’s and early 2000’s, the web would see a reset of sorts, one that came back to the foundational web technologies that it began with.


In April of 2000, as a new millennium was solidifying the stakes of the information age, John Allsopp wrote a post for A List Apart entitled “A Dao of Web Design.” It was written at the end of the first era of web design, and at the beginning of a new transformation of the web from a stylistic artifact of its print predecessors to a truly distinct design medium. “What I sense is a real tension between the web as we know it, and the web as it would be. It’s the tension between an existing medium, the printed page, and its child, the web,” Allsopp wrote. “And it’s time to really understand the relationship between the parent and the child, and to let the child go its own way in the world.”

In the post, Allsopp draws on the teachings of Daoism to sketch out ideas around a fluid and flexible web. Designers, for too long, had attempted to assert control over the web medium. That is why they turned to HTML hacks, and later, to Flash. But the web’s fluidity is also its strength, and when embraced, it opens up the possibilities for new designs.

Allsopp dedicates the second half of the post to outlining several techniques that can aid designers in embracing this new medium. In so doing, he set the stage for concepts that would be essential to web design over the next decade. He talks about accessibility, web standards, and the separation of content and appearance. Five years before the article was written, those concepts were whispered about by a barely known community. Ten years earlier, they didn’t even exist. It’s a great illustration of just how far things had come in such a short time.

Allsopp put a fine point on the struggle and tension that had defined the web over the previous decade, even as he looked to the future. From this tension, however, came a new practice entirely. The practice of web design.



Chapter 4: Search

Previously in web history…

After an influx of rapid browser development following the creation of the web, Mosaic becomes the popular choice. Recognizing the commercial potential of the web, a team at O’Reilly builds GNN, the first commercial website. With something to browse with, and something to browse for, more and more people begin to turn to the web. Many create small, personal sites of their own. The best the web has to offer becomes almost impossible to find.

eBay had had enough of these spiders. They were fending them off by the thousands. Their servers buzzed with nonstop activity; a relentless stream of trespassers. One aggressor, however, towered above the rest. Bidder’s Edge, which billed itself as an auction aggregator, would routinely crawl the pages of eBay to extract its content and list it on its own site alongside other auction listings.

The famed auction site had unsuccessfully tried blocking Bidder’s Edge in the past. Like an elaborate game of Whac-A-Mole, they would restrict the IP address of a Bidder’s Edge server, only to be breached once again by a proxy server with a new one. Technology had failed. Litigation was next.

eBay filed suit against Bidder’s Edge in December of 1999, citing a handful of causes. Among them was “an ancient trespass theory known to legal scholars as trespass to chattels, basically a trespass or interference with real property — objects, animals, or, in this case, servers.” eBay, in other words, was arguing that Bidder’s Edge was trespassing — in the most medieval sense of that word — on their servers. For it to constitute trespass to chattels, eBay had to prove that the trespassers were causing harm. That their servers were buckling under the load, they argued, was evidence of that harm.

eBay in 1999

Judge Ronald M. Whyte found that last bit compelling. Quite a bit of back and forth followed, in one of the strangest lawsuits of a new era, one that saw the phrase “rude robots” enter the official court record. These robots — as opposed to the “polite” ones — ignored eBay’s requests to block spidering on their sites and made every attempt to circumvent countermeasures. They were, by the judge’s estimation, trespassing. Whyte granted an injunction to stop Bidder’s Edge from crawling eBay until it was all sorted out.

Several appeals and countersuits and counter-appeals later, the matter was settled. Bidder’s Edge paid eBay an undisclosed amount and promptly shut their doors. eBay had won this particular battle. They had gotten rid of the robots. But the actual war was already lost. The robots — rude or otherwise — were already here.


If not for Stanford University, web search may have been lost. It is the birthplace of Yahoo!, Google and Excite. It ran the servers that ran the code that ran the first search engines. The founders of both Yahoo! and Google are alumni. But many of the most prominent players in search were not in the computer science department. They were in the symbolic systems program.

Symbolic systems was created at Stanford in 1985 as a study of the “relationship between natural and artificial systems that represent, process, and act on information.” Its interdisciplinary approach is rooted at the intersection of several fields: linguistics, mathematics, semiotics, psychology, philosophy, and computer science.

These are the same fields of study one would find at the heart of artificial intelligence research in the second half of the 20ᵗʰ century. This isn’t A.I. in its modern smart home manifestation, but the more classical notion conceived by computer scientists as a roadmap to the future of computing technology: the understanding of machines as a way to augment the human mind. That parallel is not by accident. One of the most important areas of study in the symbolic systems program is artificial intelligence.

Numbered among the alumni of the program are several of the founders of Excite and Srinija Srinivasan, the fourth employee at Yahoo!. Her work in artificial intelligence led to a position at the ambitious A.I. research lab Cyc right out of college.

Marissa Mayer, an early employee at Google and, later, Yahoo!’s CEO, also drew on A.I. research during her time in the symbolic systems program. Her groundbreaking thesis project used natural language processing to help its users find the best flights through a simple conversation with a computer. “You look at how people learn, how people reason, and ask a computer to do the same things. It’s like studying the brain without the gore,” she would later say of the program.

Marissa Mayer in 1999

Search on the web stems from this one program at one institution at one brief moment in time. Not everyone involved in search engines studied that program — the founders of both Yahoo! and Google, for instance, were graduate students of computer science. But the ideology of search is deeply rooted in the tradition of artificial intelligence. The goal of search, after all, is to extract from the brain a question, and use machines to provide a suitable answer.

At Yahoo!, the principles of artificial intelligence acted as a guide, but they would be aided by human perspective. Web crawlers like Excite, by contrast, would bear the burden of users’ queries alone, attempting to map websites programmatically to provide intelligent results.

However, it would be at Google that A.I. would become an explicitly stated goal. Steven Levy, who wrote the authoritative book on the history of Google, In the Plex, describes Google as a “vehicle to realize the dream of artificial intelligence in augmenting humanity.” Founders Larry Page and Sergey Brin would mention A.I. constantly. They even brought it up in their first press conference.

The difference would be a matter of approach. A tension that would come to dominate search for half a decade. The directory versus the crawler. The precision of human influence versus the completeness of machines. Surfers would be on one side and, on the other, spiders. Only one would survive.


The first spiders were crude. They felt around in the dark until they found the edge of the web. Then they returned home. Sometimes they gathered little bits of information about the websites they crawled. In the beginning, they gathered nothing at all.

One of the earliest web crawlers was developed at MIT by Matthew Gray. He used his World Wide Wanderer to go and find every website on the web. He wasn’t interested in the content of those sites, he merely wanted to count them up. In the summer of 1993, the first time he sent his crawler out, it got to 130. A year later, it would count 3,000. By 1995, that number grew to just shy of 30,000.

Like many of his peers in the search engine business, Gray was a disciple of information retrieval, a subfield of computer science dedicated to knowledge sharing. In practice, information retrieval often involves a robot (also known as “spiders, crawlers, wanderers, and worms”) that crawls through digital documents and programmatically collects their contents. These are then parsed and stored in a centralized “index,” a shortcut that eliminates the need to go and crawl every document each time a search is made. Keeping that index up to date is a constant struggle, and robots need to be vigilant, going back out and re-crawling information on a near constant basis.
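As an illustration of that index idea, here is a minimal sketch — with made-up documents and a deliberately naive tokenizer — of how crawled pages might be parsed into an inverted index and searched without re-reading every document:

```python
# A minimal sketch of the "index" idea at the heart of information
# retrieval: rather than re-reading every document for every search,
# a crawler records ahead of time which documents contain which words.
from collections import defaultdict

# Documents a crawler might have fetched (contents are invented).
documents = {
    "page1.html": "searchable index of the world wide web",
    "page2.html": "a guide to the web for new users",
}

# Build the inverted index: word -> set of documents containing it.
index = defaultdict(set)
for url, text in documents.items():
    for word in text.lower().split():
        index[word].add(url)

def search(query):
    """Return documents containing every word in the query."""
    results = set(documents)
    for word in query.lower().split():
        results &= index.get(word, set())
    return sorted(results)

print(search("web"))        # ['page1.html', 'page2.html']
print(search("index web"))  # ['page1.html']
```

Re-crawling, in this picture, simply means fetching the documents again and rebuilding (or patching) the index before it goes stale.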

The World Wide Web posed a problematic puzzle. Rather than a predictable set of documents, a theoretically infinite number of websites could live on the web. These needed to be stored in a central index — which would somehow be kept up to date. And most importantly, the content of those sites needed to be connected to whatever somebody wanted to search, on the fly and in seconds. The challenge proved irresistible for some information retrieval researchers and academics. People like Jonathan Fletcher.

Fletcher, a graduate of and IT employee at the University of Stirling in Scotland, didn’t like how hard it was to find websites. At the time, people relied on manual lists, like the WWW Virtual Library maintained at CERN, or Mosaic’s list of “What’s New” that was updated daily. Fletcher wanted to handle it differently. “With a degree in computing science and an idea that there had to be a better way, I decided to write something that would go and look for me.”

He built Jumpstation in 1993, one of the earliest examples of a searchable index. His crawler would go out, following as many links as it could, and bring them back to a searchable, centralized database. Then it would start over. To cope with the web’s limitless vastness, Fletcher began by crawling only the titles and some metadata from each webpage. That kept his index relatively small, but it also restricted searches to the titles of pages.

Fletcher was not alone. After several months of tinkering at the University of Washington, WebCrawler launched in April of 1994. It holds the distinction of being the first search engine to crawl entire webpages and make them searchable. By November of that year, WebCrawler had served 1 million queries. At Carnegie Mellon, Michael Mauldin released his own spider-based search engine variant named for the Latin translation of wolf spider, Lycos. By 1995, it had indexed over a million webpages.

Search didn’t stay in universities long. Search engines had a unique utility for wayward web users on the hunt for the perfect site. Many users started their web sessions on a search engine. Netscape Navigator — the number one browser for new web users — connected users directly to search engines on their homepage. Getting listed by Netscape meant eyeballs. And eyeballs meant lucrative advertising deals.

In the second half of the 1990’s, a number of major players entered the search engine market. InfoSeek, initially a paid search option, became the default search engine for Netscape and was later picked up by Disney. AOL swooped in and purchased WebCrawler as part of a bold strategy to remain competitive on the web. Lycos was purchased by a venture capitalist who transformed it into a fully commercial enterprise.

Excite.com, another crawler started by Stanford alumni and a rising star in the search engine game for the depth and accuracy of its results, was offered three million dollars not long after it launched. Its six co-founders lined up two couches, one across from the other, and talked it out all night. They decided to stick with the product and bring in a new CEO. There would be many more millions to be made.

Excite in 1996

AltaVista, already a bit late to the game at the end of 1995, was created by the Digital Equipment Corporation. It was initially built to demonstrate the processing power of DEC computers. They quickly realized that their multithreaded crawler was able to index websites at a far quicker rate than their competitors. AltaVista would routinely deploy its crawlers — what one researcher referred to as a “brood of spiders” — to index thousands of sites at a time.

As a result, AltaVista was able to index virtually the entire web, nearly 10 million webpages at launch. By the following year, in 1996, they’d be indexing over 100 million. Because of the efficiency and performance of their machines, AltaVista was able to solve the scalability problem. Unlike some of their predecessors, they were able to make the full content of websites searchable, and they re-crawled sites every few weeks, a much more rapid pace than early competitors, who could take months to update their index. They set the standard for the depth and scope of web crawlers.

AltaVista in 1996

Never fully at rest, AltaVista used its search engine as a tool for innovation, experimenting with natural language processing, translation tools, and multi-lingual search. They were often ahead of their time, offering video and image search years before that would come to be an expected feature.

Those spiders that had not been swept up in the fervor couldn’t keep up. The universities hosting the first search engines were not at all pleased to see their internet connections bloated with traffic that wasn’t even related to the university. Most universities forced the first experimental search engines, like Jumpstation, to shut down. Except, that is, at Stanford.


Stanford’s history with technological innovation begins in the second half of the 20th century. The university was, at that point, teetering on the edge of becoming a second-tier institution. They had been losing ground and lucrative contracts to their competitors on the East Coast. Harvard and MIT became the sites of a groundswell of research in the wake of World War II. Stanford was being left behind.

In 1951, in a bid to reverse course on their downward trajectory, Dean of Engineering Frederick Terman brokered a deal with the city of Palo Alto. Stanford University agreed to annex 700 acres of land for a new industrial park that upstart companies in California could use. Stanford would get proximity to energetic innovation. The businesses that chose to move there would gain unique access to the Stanford student body for use in their product development. And the city of Palo Alto would get an influx of new taxes.

Hewlett-Packard was one of the first companies to move in. They ushered in a new era of computing-focused industry that would soon be known as Silicon Valley. The Stanford Industrial Park (later renamed Stanford Research Park) would eventually host Xerox during a time of rapid success and experimentation. Facebook would spend its nascent years there, growing into the behemoth it would become. At the center of it all was Stanford.

The research park transformed the university from one of stagnation to a site of entrepreneurship and cutting-edge technology. It put them at the heart of the tech industry. Stanford would embed itself — both logistically and financially — in the crucial technological developments of the second half of the 20ᵗʰ century, including the internet and the World Wide Web.

The potential success of Yahoo!, therefore, did not go unnoticed.


Jerry Yang and David Filo were not supposed to be working on Yahoo!. They were, however, supposed to be working together. They had met years ago, when David was Jerry’s teaching assistant in the Stanford computer science program. Yang eventually joined Filo as a graduate student and — after building a strong rapport — they soon found themselves working on a project together.

As they crammed themselves into a university trailer to begin working through their doctoral project, their relationship became what Yang has often described as perfectly balanced. “We’re both extremely tolerant of each other, but extremely critical of everything else. We’re both extremely stubborn, but very unstubborn when it comes to just understanding where we need to go. We give each other the space we need, but also help each other when we need it.”

In 1994, Filo showed Yang the web. In just a single moment, their focus shifted. They pushed their intended computer science thesis to the side, procrastinating on it by immersing themselves in the depths of the World Wide Web. Days turned into weeks, which turned into months of surfing the web and trading links. The two eventually decided to combine their lists in a single place, a website hosted on their Stanford internet connection. It was called Jerry and David’s Guide to the World Wide Web, shared first with fellow Stanford students and then with the rest of the world in 1994. As catchy as that name wasn’t, the idea (and traffic) took off as friends shared with other friends.

Jerry and David’s Guide was a directory. Like the virtual library started at CERN, Yang and Filo organized websites into various categories that they made up on the fly. Some of these categories had strange or salacious names. Others were exactly what you might expect. When one category got too big, they split it apart. It was ad-hoc and clumsy, but not without charm. Through their classifications, Yang and Filo had given their site a personality. Their personality. In later years, Yang would commonly refer to this as the “voice of Yahoo!”

That voice became a guide — as the site’s original name suggested — for new users of the web. Their web crawling competitors were far more adept at the art of indexing millions of sites at a time. Yang and Filo’s site featured only a small subset of the web. But it was, at least by their estimation, the best of what the web had to offer. It was the cool web. It was also a web far easier to navigate than ever before.

Jerry Yang (left) and David Filo (right) in 1995 (Yahoo, via Flickr)

At the end of 1994, Yang and Filo renamed their site to Yahoo! (an awkward forced acronym for Yet Another Hierarchical Officious Oracle). By then, they were getting almost a hundred thousand hits a day, sometimes temporarily taking down Stanford’s internet in the process. Most other universities would have closed down the site and told them to get back to work. But not Stanford. Stanford had spent decades preparing for on-campus businesses just like this one. They kept the server running, and encouraged its creators to stake their own path in Silicon Valley.

Throughout 1994, Netscape had included Yahoo! in their browser. There was a button in the toolbar labeled “Net Directory” that linked directly to Yahoo!. Marc Andreessen, believing in the site’s future, agreed to host their website on Netscape’s servers until they were able to get on steady ground.

Yahoo! homepage in Netscape Navigator, circa 1994

Yang and Filo rolled up their sleeves and began talking to investors. It wouldn’t take long. By the spring of 1996, they would have a new CEO and hold their own record-setting IPO, outstripping even their gracious host, Netscape. By then, they had become the most popular destination on the web by a wide margin.

In the meantime, the web had grown far beyond the grasp of two friends swapping links. They had managed to categorize tens of thousands of sites, but there were hundreds of thousands more to crawl. “I picture Jerry Yang as Charlie Chaplin in Modern Times,” one journalist wrote, “confronted with an endless stream of new work that is only increasing in speed.” The task of organizing sites would have to go to somebody else. Yang and Filo found help in a fellow Stanford alum, someone they had met years earlier while studying abroad together in Japan: Srinija Srinivasan, a graduate of the symbolic systems program. Many of the earliest hires at Yahoo! were given slightly absurd titles that always ended in “Yahoo.” Yang and Filo went by Chief Yahoos. Srinivasan’s job title was Ontological Yahoo.

That is a deliberate and precise job title, and it was not selected by accident. Ontology is the study of being, an attempt to break the world into its component parts. It has manifested in many traditions throughout history and the world, but it is most closely associated with the followers of Socrates, in the work of Plato, and later in the groundbreaking text Metaphysics, written by Aristotle. Ontology asks the question “What exists?” and uses it as a thought experiment to construct an ideology of being and essence.

As computers blinked into existence, ontology found a new meaning in the emerging field of artificial intelligence. It was adapted to fit the more formal hierarchical categorizations required for a machine to see the world; to think about the world. Ontology became a fundamental way to describe the way intelligent machines break things down into categories and share knowledge.

The dueling definitions of the ontology of metaphysics and computer science would have been familiar to Srinija Srinivasan from her time at Stanford. The combination of philosophy and artificial intelligence in her studies gave her a unique perspective on hierarchical classifications. It was this experience that she brought to her first job after college at the Cyc Project, an artificial intelligence research lab with a bold project: to teach a computer common sense.

Srinija Srinivasan (Getty Images/James D. Wilson)

At Yahoo!, her task was no less bold. When someone looked for something on the site, they didn’t want back a random list of relevant results. They wanted the result they were actually thinking about, but didn’t quite know how to describe. Yahoo! had to — in a manner of seconds — figure out what its users really wanted. Much like her work in artificial intelligence, Srinivasan needed to teach Yahoo! how to think about a query and infer the right results.

To do that, she would need to expand the voice of Yahoo! to thousands more websites in dozens of categories and sub-categories without losing the point of view established by Jerry and David. She would need to scale that perspective. “This is not a perfunctory file-keeping exercise. This is defining the nature of being,” she once said of her project. “Categories and classifications are the basis for each of our worldviews.”

At a steady pace, she mapped an ontology of human experience onto the site. She began breaking up the makeshift categories she inherited from the site’s creators, reconstituting them into more concrete and findable indexes. She created new categories and destroyed old ones. She sub-divided existing subjects into new, more precise ones. She began cross-linking results so that they could live within multiple categories. Within a few months she had overhauled the site with a fresh hierarchy.

That hierarchical ontology, however, was merely a guideline. The strength of Yahoo!’s expansion lay in the 50 or so content managers she had hired in the meantime. They were known as surfers. Their job was to surf the web — and organize it.

Each surfer was coached in the methodology of Yahoo! but was left with a surprising amount of editorial freedom. They cultivated the directory with their own interests, meticulously deliberating over websites and where they belonged. Each decision could be strenuous, and there were missteps and incorrectly categorized items along the way. But by allowing individual personality to dictate hierarchical choices, Yahoo! retained its voice.

They gathered as many sites as they could, adding hundreds each day. Yahoo! surfers did not reveal everything on the web to their site’s visitors. They showed them what was cool. And that meant everything to users grasping for the very first time what the web could do.


At the end of 1995, the Yahoo! staff was watching their traffic closely. Huddled around consoles, employees would check their logs again and again, looking for a drop in visitors. Yahoo! had been the destination for the “Internet Directory” button on Netscape almost from the beginning. It had been the source of their growth and traffic. Netscape had made the decision, at the last minute (and seemingly at random), to drop Yahoo!, replacing them with the new kids on the block, Excite.com. Best case scenario: a manageable drop. Worst case: the demise of Yahoo!.

But the drop never came. A day went by, and then another. And then a week. And then a few weeks. And Yahoo! remained the most popular website. Tim Brady, one of Yahoo!’s first employees, describes the moment with earnest surprise. “It was like the floor was pulled out in a matter of two days, and we were still standing. We were looking around, waiting for things to collapse in a lot of ways. And we were just like, I guess we’re on our own now.”

Netscape wouldn’t keep their directory button exclusive for long. By 1996, they would begin allowing other search engines to be listed in their browser’s “search” feature. A user could click a button and a drop-down of options would appear; the engines paid a fee to be listed. Yahoo! bought itself back into the drop-down, joined by four other search engines: Lycos, InfoSeek, Excite, and AltaVista.

By that time, Yahoo! was the unrivaled leader. It had transformed its first-mover advantage into a new strategy, one bolstered by a successful IPO and an influx of new investment. Yahoo! wanted to be much more than a simple search engine. The transformed site would eventually be called a portal: a central location for every possible need on the web. Through a number of product expansions and aggressive acquisitions, Yahoo! released a new suite of branded digital products. Need to send an email? Try Yahoo! Mail. Looking to create a website? There’s Yahoo! GeoCities. Want to track your schedule? Use Yahoo! Calendar. And on and on the list went.

Yahoo! in 1996

Competitors rushed to fill the vacuum of the #2 slot. In April of 1996, Yahoo!, Lycos, and Excite all went public to soaring stock prices. Infoseek had its initial offering only a few months later. Big deals collided with bold blueprints for the future. Excite began positioning itself as a more vibrant alternative to Yahoo! with more accurate search results from a larger slice of the web. Lycos, meanwhile, all but abandoned the search engine that had brought it initial success to chase after the portal-based game plan that had been a windfall for Yahoo!.

The media dubbed the competition the “portal wars,” a fleeting moment in web history when millions of dollars poured into a single strategy. To be the biggest, best, centralized portal for web surfers. Any service that offered users a destination on the web was thrown into the arena. Nothing short of the future of the web (and a billion dollar advertising industry) was at stake.

In some ways, though, the portal wars were over before they started. When Excite announced a gigantic merger with @Home, an Internet Service Provider, to combine their services, not everyone thought it was a wise move. “AOL and Yahoo! were already in the lead,” one investor and cable industry veteran noted, “and there was no room for a number three portal.” AOL had just enough muscle and influence to elbow their way into the #2 slot, nipping at the heels of Yahoo!. Everyone else would have to go toe-to-toe with Goliath. None were ever able to pull it off.

Battling their way to market dominance, most search engines had simply lost track of search. Buried somewhere next to your email and stock ticker and sports feed was, in most cases, a second-rate search engine you could use to find things — only not often and not well. That is why it was so refreshing when another search engine out of Stanford launched with just a single search box and two buttons, its bright and multicolored logo plastered across the top.


A few short years after it launched, Google was on the shortlist of most popular sites. In an interview with PBS Newshour in 2002, co-founder Larry Page described their long-term vision. “And, actually, the ultimate search engine, which would understand, you know, exactly what you wanted when you typed in a query, and it would give you the exact right thing back, in computer science we call that artificial intelligence.”

Google could have started anywhere. It could have started with anything. One employee recalls an early conversation with the site’s founders where he was told “we are not really interested in search. We are making an A.I.” Larry Page and Sergey Brin, the creators of Google, were not trying to create the web’s greatest search engine. They were trying to create the web’s most intelligent website. Search was only their most logical starting point.

Imprecise and clumsy, the spider-based search engines of 1996 faced an uphill battle. AltaVista had proved that the entirety of the web, tens of millions of webpages, could be indexed. But unless you knew your way around a few boolean logic commands, it was hard to get the computer to return the right results. The robots were not yet ready to infer, in Page’s words, “exactly what you wanted.”

Yahoo! had filled in these cracks of technology with their surfers. The surfers were able to course-correct the computers, designing their directory piece by piece rather than relying on an algorithm. Yahoo! became an arbiter of a certain kind of online chic; tastemakers reimagined for the information age. The surfers of Yahoo! set trends that would last for years. Your site would live or die by their hand. Machines couldn’t do that work on their own. If you wanted your machines to be intelligent, you needed people to guide them.

Page and Brin disagreed. They believed that computers could handle the problem just fine. And they aimed to prove it.

That unflappable confidence would come to define Google far more than their “don’t be evil” motto. In the beginning, their laser focus on designing a different future for the web would leave them blind to the day-to-day grind of the present. On not one, but two occasions, checks made out to the company for hundreds of thousands of dollars were left in desk drawers or car trunks until somebody finally made the time to deposit them. And they often did things differently. Google’s offices, for instance, were built to simulate a college dorm, an environment the founders felt most conducive to big ideas.

Google would eventually build a literal empire on top of a sophisticated, world-class infrastructure of their own design, fueled by the most elaborate and complex (and arguably invasive) advertising mechanism ever built. There are few companies that loom as large as Google. This one, like others, started at Stanford.


Even among the most renowned artificial intelligence experts, Terry Winograd, a computer scientist and Stanford professor, stands out in the crowd. He was also Larry Page’s advisor and mentor when he was a graduate student in the computer science department. Winograd has often recalled the unorthodox and unique proposals he would receive from Page for his thesis project, some of which involved “space tethers or solar kites.” “It was science fiction more than computer science,” he would later remark.

But for all of his fanciful flights of imagination, Page always returned to the World Wide Web. He found its hyperlink structure mesmerizing. Its one-way links — a crucial ingredient in the web’s success — had led to a colossal proliferation of new websites. In 1996, when Page first began looking at the web, there were tens of thousands of sites being added every week. The master stroke of the web was to enable links that only traveled in one direction. That allowed the web to be decentralized, but without a central database tracking links, it was nearly impossible to collect a list of all of the sites that linked to a particular webpage. Page wanted to build a graph of who was linking to whom, an index he could use to cross-reference related websites.

Page understood that the hyperlink was a digital analog to academic citations. A key indicator of the value of a particular academic paper is the number of times it has been cited. If a paper is cited often (by other high quality papers), it is easier to vouch for its reliability. The web works the same way. The more often your site is linked to (what’s known as a backlink), the more dependable and accurate it is likely to be.

Theoretically, you can determine the value of a website by adding up all of the other websites that link to it. But that’s only one layer. If 100 sites link back to you, but each of them has only ever been linked to once, that’s far less valuable than if five sites that have each been linked to 100 times link back to you. So it’s not simply how many links you have, but the quality of those links. If you take both of those dimensions and aggregate sites using backlinks as a criterion, you can very quickly start to assemble a list of sites ordered by quality.

John Battelle describes the technical challenge facing Page in his own retelling of the Google story, The Search.

Page realized that a raw count of links to a page would be a useful guide to that page’s rank. He also saw that each link needed its own ranking, based on the link count of its originating page. But such an approach creates a difficult and recursive mathematical challenge — you not only have to count a particular page’s links, you also have to count the links attached to the links. The math gets complicated rather quickly.

Fortunately, Page already knew a math prodigy. Sergey Brin had proven his brilliance to the world a number of times before he began a doctoral program in the Stanford computer science department. Brin and Page had crossed paths on several occasions, a relationship that began on rocky ground but grew towards mutual respect. The mathematical puzzle at the center of Page’s idea was far too enticing for Brin to pass up.

He got to work on a solution. “Basically we convert the entire Web into a big equation, with several hundred million variables,” he would later explain, “which are the page ranks of all the Web pages, and billions of terms, which are the links. And we’re able to solve that equation.” Scott Hassan, the seldom talked about third co-founder of Google who developed their first web crawler, summed it up a bit more concisely, describing Google’s algorithm as an attempt to “surf the web backward!”

The result was PageRank — as in Larry Page, not webpage. Brin, Page, and Hassan developed an algorithm that could trace the backlinks of a site to determine the quality of a particular webpage. The higher the value of a site’s backlinks, the higher up the rankings it climbed. They had discovered what so many others had missed. If you trained a machine on the right source — backlinks — you could get remarkable results.
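The “big equation” Brin describes can be approximated with a short iterative computation. The sketch below runs the core PageRank idea over a tiny, invented link graph; it illustrates the ranking principle rather than Google’s actual implementation, and the damping factor and iteration count are conventional textbook choices, not details from the original system:

```python
# A minimal sketch of the PageRank idea on a toy link graph.
# The sites and links are invented for illustration; real PageRank
# runs over billions of pages and adds many refinements.

# Who links to whom: page -> list of pages it links out to.
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
    "d.com": ["c.com"],
}

damping = 0.85           # probability a "random surfer" follows a link
pages = list(links)
rank = {page: 1.0 / len(pages) for page in pages}  # start with equal rank

# Repeatedly pass each page's rank along its outbound links.
for _ in range(50):
    new_rank = {page: (1 - damping) / len(pages) for page in pages}
    for page, outbound in links.items():
        share = rank[page] / len(outbound)
        for target in outbound:
            new_rank[target] += damping * share
    rank = new_rank

# Pages linked to by highly ranked pages end up ranked highly themselves.
for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

The recursion Battelle describes is handled by simple repetition: each pass feeds the previous pass’s ranks back into the calculation until the numbers settle.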

It was only when they began matching their rankings to search queries that they realized PageRank fit best in a search engine. They called their search engine Google. It launched on Stanford’s internet connection in August of 1996.

Google in 1998

Google solved the relevancy problem that had plagued online search since its earliest days. Crawlers like Lycos, AltaVista and Excite were able to provide a list of webpages that matched a particular search. They just weren’t able to sort them right, so you had to go digging to find the result you wanted. Google’s rankings were immediately relevant. The first page of your search usually had what you needed. They were so confident in their results they added an “I’m Feeling Lucky” button which took users directly to the first result for their search.

Google’s growth in their early days was not unlike Yahoo!’s in theirs. They spread through word of mouth, from friends to friends of friends. By 1997, they had grown big enough to put a strain on the Stanford network, something Yang and Filo had done only a couple of years earlier. Stanford once again recognized possibility. It did not push Google off their servers. Instead, Stanford’s advisors pushed Page and Brin in a commercial direction.

Initially, the founders sought to sell or license their algorithm to other search engines. They took meetings with Yahoo!, Infoseek and Excite. No one could see the value. They were focused on portals. In a move that would soon sound absurd, they each passed up the opportunity to buy Google for a million dollars or less, and Page and Brin could not find a partner that recognized their vision.

One Stanford faculty member was able to connect them with a few investors, including Jeff Bezos and David Cheriton (which got them those first few checks that sat in a desk drawer for weeks). They formally incorporated in September of 1998, moving into a friend’s garage and bringing a few early employees along, including symbolic systems alumna Marissa Mayer.

Larry Page (left) and Sergey Brin (right) started Google in a friend’s garage.

Even backed by a million dollar investment, the Google founders maintained a philosophy of frugality, simplicity, and swiftness. Despite occasional urging from their investors, they resisted the portal strategy and remained focused on search. They continued tweaking their algorithm and working on the accuracy of their results. They focused on their machines. They wanted to take the words that someone searched for and turn them into something actually meaningful. If you weren’t able to find the thing you were looking for in the top three results, Google had failed.

Google was followed by a cloud of hype and positive buzz in the press. Writing in Newsweek, Steven Levy described Google as a “high-tech version of the Oracle of Delphi, positioning everyone a mouse click away from the answers to the most arcane questions — and delivering simple answers so efficiently that the process becomes addictive.” It was around this time that “googling” — a verb form of the site synonymous with search — entered the common vernacular. The portal wars were still raging, but Google was poking its head up as a calm, precise alternative to the noise.

At the end of 1998, they were serving up ten thousand searches a day. A year later, that would jump to seven million a day. But quietly, behind the scenes, they began assembling the pieces of an empire.

As the web grew, technologists and journalists predicted the end of Google; they would never be able to keep up. But they did, outlasting a dying roster of competitors. In 2001, Excite went bankrupt, Lycos closed down, and Disney suspended Infoseek. Google climbed up and replaced them. It wouldn’t be until 2006 that Google would finally overtake Yahoo! as the number one website. But by then, the company would transform into something else entirely.

After securing another round of investment in 1999, Google moved into their new headquarters and brought on an army of new employees. The list of fresh recruits included former engineers at AltaVista, and leading artificial intelligence expert Peter Norvig. Google put an unprecedented focus on advancements in technology. Better servers. Faster spiders. Bigger indexes. The engineers inside Google invented a web infrastructure that had, up to that point, been only theoretical.

They trained their machines on new problems and new products. But regardless of the application, be it translation or email or pay-per-click advertising, each rested on the same premise. Machines can augment and re-imagine human intelligence, and they can do it at limitless scale. Google took the value proposition of artificial intelligence and brought it into the mainstream.

In 2001, Page and Brin brought in Silicon Valley veteran Eric Schmidt to run things as their CEO, a role he would occupy for a decade. He would oversee the company during its time of greatest growth and innovation. Google employee #4 Heather Cairns recalls his first days on the job. “He did this sort of public address with the company and he said, ‘I want you to know who your real competition is.’ He said, ‘It’s Microsoft.’ And everyone went, What?

Bill Gates would later say, “In the search engine business, Google blew away the early innovators, just blew them away.” There would come a time when Google and Microsoft would come face to face. Eric Schmidt was correct about where Google was going. But it would take years for Microsoft to recognize Google as a threat. In the second half of the 1990’s, they were too busy looking in their rearview mirror at another Silicon Valley upstart that had swept the digital world. Microsoft’s coming war with Netscape would subsume the web for over half a decade.



Chapter 2: Browsers

Previously in web history…

Sir Tim Berners-Lee creates the technologies behind the web — HTML, HTTP, and the URL, which blend hypertext with the Internet — with a small team at CERN. He convinces the higher-ups in the organization to put the web in the public domain so anyone can use it.

Dennis Ritchie had a problem.

He was working on a new, world class operating system. He and a few other colleagues were building it from the ground up to be simple and clean and versatile. It needed to run anywhere and it needed to be fast.

Ritchie worked at Bell Labs. A hotbed of innovation in the ’60s and ’70s, Bell employed some of the greatest minds in telecommunications. While there, Ritchie had worked on a time-sharing project known as Multics. He was fiercely passionate about what he saw as the future of computing. Still, after years of development and little to show for it, Bell eventually dropped the project. But Ritchie and a few of his colleagues refused to let the dream go. They carried the ideas of Multics into a new operating system, one adaptable and extendable enough to be used for networked time sharing. They called it Unix.

Ritchie’s problem was with Unix’s software. More precisely, his problem was with the language the software ran on. He had been writing most of Unix in assembly code, quite literally feeding paper tape into the computer, the way it was done in the earliest days of computing. Programming directly in assembly — being “close to the metal” as some programmers refer to it — made Unix blazing fast and memory efficient. The process, on the other hand, was laborious and prone to errors.

Ritchie’s other option was to use B, an interpreted programming language developed by his co-worker Ken Thompson. B was much simpler to code with, several steps abstracted from the bare metal. However, it lacked features Ritchie felt were crucial. B also suffered under the weight of its own design; it was slow to execute and lacked the resilience needed for time-sharing environments.

Ritchie’s solution was to choose neither. Instead, he created a compiled programming language with many of the same features as B, but with more access to the kinds of things you could expect from assembly code. That language is called C.

By the time Unix shipped, it had been fully rewritten in C, and the programming language came bundled with every operating system that ran on top of it, which, as it turned out, was a lot of them. As more programmers tried C, they adapted to it quickly. It blended, some might say perfectly, abstract functions and methods for creating predictable software patterns with the ability to get right down to the metal if needed. It isn’t prescriptive, but it doesn’t leave you completely lost. Saron Yitbarek, host of the Command Line Heroes podcast, describes C as “a nearly universal tool for programming; just as capable on a personal computer as it was on a supercomputer.”

C has been called a Swiss Army language. There is very little it can’t do, and very little that hasn’t been done with it. Computer scientist Bill Dally once said, “It set the tone for the way that programming was done for several decades.” And that’s true. Many of the programming paradigms developed in the latter half of the 20th century originated in C. Compilers were developed beyond Unix and became available for every operating system. Rob Pike, a software engineer involved in the development of Unix, and later Go, has a much simpler way of putting it. “C is a desert island language.”

Ritchie had a saying of his own he was fond of repeating: “C has all the elegance and power of assembly language with all the readability and maintainability of… assembly language.” C is not necessarily everyone’s favorite programming language, and there are plenty of problems with it. (C#, created in the early 2000s, was one of many attempts to improve it.) However, as it proliferated out into the world, bundled with Unix-like systems and environments like the X Window System, Linux, and Mac OS X, software developers turned to it as a way to speak to one another. It became a kind of common tongue. Even if you weren’t fluent, you could probably understand the language conversationally. And if you needed to bundle up and share some code, C was a great way to do it.

In 1993, Jean-François Groff and Sir Tim Berners-Lee released a package with all of the technologies of the web. It could be used to build web servers or browsers. They called it libwww, and released it to the public domain. It was written in C.


Think about the first time you browsed the web. That first webpage. Maybe it was a rich experience, filled with images, careful design and content you couldn’t find anywhere else. Maybe it was unadorned, uninteresting, and brief. No matter what that page was, I’d be willing to bet that it had some links. And when you clicked that link, there was magic. Suddenly, a fresh page arrives on your screen. You are now surfing the web. And in that moment you understand what the web is.

Sir Tim Berners-Lee finished writing the first web browser, WorldWideWeb, in the final days of 1990. It ran on his NeXT machine, and had read and write capabilities (the latter of which could be used to manage a homepage on the web). The NeXTcube wasn’t the heaviest computer you’ve ever seen, but it was still a desktop. That didn’t stop Berners-Lee from lugging it from conference to conference so he could plug it in and show people the web.

Again and again, he ran into the same problem. It seems obvious now: he was demonstrating a globally networked hypertext application on a little-used operating system (NeXTSTEP), on a computer few people owned (the NeXT Computer), alone at a conference with no Internet connection. The real trouble came after the demo, with the inevitable question: how can I start using it? The web loses its magic if you can’t connect to the network yourself. It’s entirely useless isolated on a single computer. To make the idea click, Berners-Lee needed to get everybody surfing the web. And he couldn’t very well lend his computer out to anybody who wanted to use it.

That’s where Nicola Pellow came in. An undergraduate at Leicester Polytechnic, Pellow was an intern at CERN, assigned to Berners-Lee’s and Cailliau’s team. They tasked her with building an interoperable browser that could be installed anywhere. The fact that she had no background in programming (she was studying mathematics) and was only at CERN for an internship didn’t concern her much. Within a couple of months she picked up a bit of C programming and built the Line Mode Browser.

Using the Line Mode Browser today, you would probably feel like a hacker from the 1980s. It was a text-only browser designed to run from a command line terminal. In most cases, just plain white text on a black background, pixels bleeding from edge to edge. Typing out a web address into the browser would bring up that website’s text on the screen. The up and down arrows on a keyboard could be used for navigation. Links were visible as a numbered list, and one could jump from site to site by entering the right number.

It was designed that way for a reason. Its simplicity guaranteed interoperability. The Line Mode Browser holds the unique distinction of being, for many years, the only browser that was truly platform-agnostic. It could be installed on just about any computer or operating system. It made getting online easy, provided you knew what to do once you had it installed. Pellow left CERN a few months after she released the Line Mode Browser. She returned after graduation, and helped build the first Mac browser.

Almost as soon as Pellow left, Berners-Lee and Cailliau wrangled another recruit. Jean-François Groff was working at CERN, one office over. A programmer for years, Groff had written the French translation of The C Programming Language, the official guide written by Brian Kernighan and the language’s creator, Dennis Ritchie. He was working on a bit of physics software for UNIX systems when he got a chance to see what Berners-Lee was working on.

Not everybody understood what the web was going for. It can be difficult to grasp without the worldwide picture we have today. Groff was not one of those people. He longed for something just like the web. He understood perfectly what the web could be. Almost as soon as he saw a demo, he requested a transfer to the team.

He noticed one problem right away. “So this line mode browser, it was a bit of a chicken and egg problem,” he once described in an interview, “because to use it, you had to download the software first and install it and possibly compile it.” You had to use the web to download a web browser, but you needed a web browser to use the web. Groff found a clever solution. He built a simple mechanism that allowed users to telnet in to the NeXT server and browse the web using its built-in Line Mode Browser. So anyone in the world could remotely access the web without even needing to install the browser. Once they were able to look around, Groff hoped, they’d be hooked.

But Groff wanted to take it one step further. He came from UNIX systems, and C programming. C is a desert island language. Its versatility makes it invaluable as a one-size-fits-all solution. Groff wanted the web to be a desert island platform. He wanted it to be used in ways he hadn’t even imagined yet, ways that scientists at research institutions couldn’t even fathom. The one medium you could do anything with. To do that, he would need to make the web far more portable.

Working alongside Berners-Lee, Groff began pulling out the essential elements of the NeXT browser and porting them to the C programming language. Groff chose C not only because he was familiar with it, but because he knew most other programmers would be as well. Within a few months, he had built the libwww package (its official title would come a couple of years later). The libwww package was a set of common components for making graphical browsers. Included was the necessary code for parsing HTML, processing HTTP requests and rendering pages. It also provided a starting point for creating browser UI, and tools for embedding browser history and managing graphical windows.
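To give a feel for what those components did, here is a toy sketch in C. It is not the actual libwww interface, just an illustration of the “parse the markup, show the text” job the package handled for early browser makers; the hard-coded page and the function name are invented for the example, and in a real client the document would arrive over HTTP.

```c
#include <stdio.h>

/* Strip anything between '<' and '>' and print the rest, which is more or
 * less what a text-mode browser does with a page's markup. */
static void render_text(const char *html) {
    int in_tag = 0;
    for (const char *p = html; *p; p++) {
        if (*p == '<')       in_tag = 1;
        else if (*p == '>')  in_tag = 0;
        else if (!in_tag)    putchar(*p);
    }
}

int main(void) {
    /* A stand-in document; a real browser would fetch this over the network. */
    const char *page =
        "<TITLE>A hypothetical page</TITLE>\n"
        "<P>Some text, and <A HREF=\"another.html\">a link</A>.\n";
    render_text(page);
    return 0;
}
```

A real library also has to handle the networking, the request headers, and the browser history, but the shape of the work is the same: turn a stream of markup into something a person can read.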

Berners-Lee announced the web to the public for the first time on August 7, 1991. He posted a brief description along with a simple note:

If you’re interested in using the code, mail me. It’s very prototype, but available by anonymous FTP from info.cern.ch. It’s copyright CERN but free distribution and use is not normally a problem.

If you were to email Sir Tim Berners-Lee, he’d send you back the libwww package.

By November of 1992, the library had fully matured into a set of reusable tools. When CERN put the web in the public domain the following year, its terms included the libwww package. By 1993, anyone with a bit of time on their hands and a C compiler could create their own browser.

Before he left CERN to become one of the first web consultants, Groff did one final thing. He created a new mailing list, called www-talk, for a new generation of browser developers to talk shop.


On December 13, 1991 — almost a year after Berners-Lee had put the finishing touches on the first ever browser — Pei-Yuan Wei posted to the www-talk mailing list. After a conversation with Berners-Lee, he had built a browser called ViolaWWW. In a few months, it would be the most popular of the early browsers. In the middle of his post, Wei offhandedly — in a tone that would come off as bragging if it weren’t so sincere — mentioned that the browser build was a one night hack.

A one night hack. Not even Berners-Lee or Pellow could pull that off. Wei continued the post with the reasons he was able to get it up and running so quickly. But that nuance would be lost to history. What programmers would remember is that it took only one night to build a browser. It was “hacked” together and shipped to the world, buggy, but usable. That phrase would set the tone and pace of browser development for at least the next decade. It is arguably the dominant ideology among browser makers today.

The irony is that the opposite was true. ViolaWWW was the product of years of work that simply culminated in a single night. Wei is a great software programmer. But he also had all the pieces he needed before the night even started.

Pei-Yuan Wei has made a few appearances on the frontlines of web history. Apart from the ViolaWWW browser, he was hired by Dale Dougherty to work on an early version of GNN.com, the first commercial website. He was at a meeting of web pioneers the day the idea of the W3C was first discussed. In 2012, he was on the list of witnesses to speak in court to the many dangers of the Stop Online Piracy Act (SOPA). In the web’s early history Wei was a persistent presence.

Wei was a student at UC Berkeley in the early 90s. It was HyperCard that set off his fascination with hypertext software. HyperCard was an application built for the Mac operating system in the late 80s. It allowed its users to create stacks of virtual “cards,” each with a bit of info. Users could then connect these cards however they wanted, and quickly sort, search, and navigate through their stacks. People used it to organize their recipes, replace their Rolodexes, organize research notes, and a million other things. HyperCard is the kind of software that attracts a person who demands a certain level of digital meticulousness, the kind of user who organizes their desktop folders into neat sections and precisely tags their data. This core group of power users manipulated the software using its built-in scripting language, HyperTalk, to extend it to new heights.

Wei had barely glimpsed HyperCard before he knew he needed to use it. But he was on an X-Windows computer, and HyperCard could only run on a Mac. Wei was not to be deterred. Instead of buying a Mac (an expensive but reasonable solution to the problem), Wei began to write software of his own. He even went one step further: he began by creating his very own programming language. He called it Viola, and the first thing he built with it was a HyperCard clone.

Wei felt that the biggest limitation of HyperCard — and by extension his own hypertext software — was that it lacked access to a network. What good was data if it was locked up inside of a single computer? By the time he had reached that conclusion, it was nearing the end of 1991, around the time he saw a mention of the World Wide Web. So one night, he took Viola, combined it with libwww, and built a web browser. ViolaWWW was officially released.

ViolaWWW was built so quickly because most of it was already done by the time Wei found out about the web project. The Viola programming language was in the works for a couple of years at that point. It had already been built to accept hyperlinks and hypermedia for the HyperCard clone. It had been built to be extendable to other possible applications. Once Wei was able to pick apart libwww, he ported his software to read HTML, which itself was still a preposterously simple language. And that piece, the final tip of the iceberg, only took him a single night.

ViolaWWW would be the site of a lot of experimentation on the early web. Wei was the first to include an early version of stylesheets. He added a bookmarking function. The browser supported forms and embedded media. In a prescient move, Wei also included downloadable applets, allowing fairly advanced applications to run inside of the browser. This became the template for what would eventually be known as Java applets.

For X-Windows users, ViolaWWW was the most popular browser on the market. Until the next thing came along.


Releasing a browser in the early 90s was almost a rite of passage. It was a useful exercise to download the libwww package and open it up in your text editor. The web wasn’t all that complicated: there was a bit of code for rendering HTML, and for processing HTTP requests from web servers (or other origins, like FTP or Gopher). Programmers of the web used a browser project as a way of getting familiar with its features. It was kind of like the “Hello World” of the early web.

In June of 1993, there were 130 websites in the entire world. There were easily a dozen browsers to choose from. That’s roughly one browser for every ten websites.

This rapid development of browsers was driven by the nature of innovation in the web community. When Berners-Lee put the web in the public domain, he did more than just give it to the world. He put openness at the center of its ideology. It would take until 1994 — with the release of Netscape — for the web to get its first commercial browser. Until then, the “browser makers” were a small community of programmers talking things out on the www-talk mailing list, trying to make web browsing feel as revolutionary as they wanted it to be.

Some of the earliest projects simply ported an existing browser to another operating system. Occasionally, one of the browser makers would spontaneously release something that now feels essential. The first PDF rendering inside of a browser window was part of the Midas browser. HTML tables were introduced and properly laid out in another called Arena. Tabbed browsing was a prominent feature in InternetWorks. All of these features were developed before 1995.

Most early browsers have faded into obscurity. But the people behind them didn’t. Counted among the earliest browser makers are future employees at Netscape, members of the W3C and the web standards movement, the inventor of cookies (and the blink tag), and the creators of some of the most important websites of the early web.

Of course, no one knew that at the time. To most of the creators, it was simply an exercise in making something cool they could pass along to their Internet friends.


The New York Times introduced its readers to the web on December 8, 1993. “Think of it as a map to the buried treasures of the Information Age,” read the first line. But the “map” the writer was referring to — one he would spend the first half of the article describing — wasn’t the World Wide Web; it was its most popular browser. A browser called Mosaic.

Mosaic was created, in part, by Marc Andreessen. Like many of the early web pioneers, Andreessen is a man of lofty ambition. He is drawn to big ideas and grand statements (he once said that software would “eat the world”). In college, he was known for being far more talkative than your average software engineer, chatting it up about the next big thing.

Andreessen has had a decades-long passion for technology. Years later, he would capture the imagination of the public with the world’s first commercial browser: Netscape Navigator. He would grace the cover of Time magazine. He would become a cornerstone of Silicon Valley, define its rapid “ship first, think later” ethos for years, and seek and capture his fortune in the world of venture capital.

But Mosaic’s story does not begin with a commanding legend of Silicon Valley overseeing, for better or worse, the future of technology. It begins with a restless college student.

When Sir Tim Berners-Lee posted the initial announcement about the web, a couple of years before the article in The New York Times, Andreessen was an undergraduate student at the University of Illinois. While he attended school he worked at the university-affiliated computing lab known as the National Center for Supercomputing Applications (NCSA). NCSA occupied a similar space as ARPA in that both were state-sponsored projects without an explicit goal other than to further the science of computing. If you worked at NCSA, it was possible to move from project to project without arousing too much suspicion from the higher-ups.

Andreessen was supposed to be working on visualization software, which he had found a way to run mostly on auto-pilot. In his spare time, he would ricochet around the office, listening to everyone talk about what they were interested in. It was during one of those sessions that a colleague introduced him to the World Wide Web. He was immediately taken with it. He downloaded the ViolaWWW browser, and within a few days he had decided that the web would be his primary focus. He decided something else too. He needed to make a browser of his own.

In 1992, browsers could be cumbersome software. They lacked the polish and the conventions of modern browsers; there were no decades of accumulated lessons to build on. They were difficult to download and install, often requiring users to make modifications to system files. And early browser makers were so focused on developing the web itself that they didn’t think too much about the visual interface of their software.

Andreessen wanted to build a well-designed, performant, easy-to-install browser while simultaneously building on the features that Wei was adding to the ViolaWWW browser. He pitched his idea to a programmer at NCSA, Eric Bina. “Marc’s a very good salesman,” Bina would later recall, so he joined up.

Taking their cue from the pace of others, Andreessen and Bina finished the first version of the Mosaic browser in just a few weeks. It was available for X Windows computers. To announce the browser, Andreessen posted a download link to the www-talk mailing list, with the message “By the power vested in me by nobody in particular, alpha/beta version 0.5 of NCSA’s Motif-based networked information systems and World Wide Web browser, X Mosaic, is hereby released.” The web got more than just a popular browser. It got its first pitchman.

That first version of the browser was impressive in a somewhat crowded field. To be sure, it had forms and some media support early on. But it wasn’t the best browser, nor was it the most advanced browser. Instead, Andreessen and Bina focused on something else entirely. Mosaic set itself apart because it was the easiest to use. The installation process was simple and the interface was, relatively speaking, intuitive.

The Mosaic browser’s secret weapon was its iteration. Before long, other programmers at NCSA wanted in on the project. They parceled off different operating systems to port the browser to. One team took the Mac, another Windows. By the fall of 1993, a few months after its initial release, Mosaic had versions at feature parity on Mac, Windows, and Unix systems, as well as compatible server software.

After that, the pace of development only accelerated. Beta versions were released often and were available to download via FTP. New features were added at a rapid pace and new versions seemed to ship every week. The NCSA Mosaic team was fully engaged with the web community, active on the www-talk mailing list, talking with users and gathering bug reports. It was not at all unusual to submit a bug report and hear back a few hours later from an NCSA programmer with a fix.

Andreessen was a particularly active presence, posting to threads almost daily. When the Mosaic team decided they might want to collect anonymous analytics about browser usage, Andreessen polled the www-talk list to see if it was a good idea. When he got a lot of questions about how to use HTML, he wrote a guide for beginners.

When one Mosaic user posted some issues he was having, it led to a tense back and forth between that user and Andreessen. The user insisted he wasn’t a customer, so Andreessen shouldn’t care too much about what he thought. Andreessen replied, “We do care what you think simply because having the wonderful distributed beta team that we essentially have due to this group gives us the opportunity to make our product much better than it could be otherwise.” What Andreessen understood better than any of the early browser makers was that Mosaic was a product, and that feedback from his users could drive its development. If they kept the feedback loop tight, they could keep the interface clean and bug-free while staying on the cutting edge of new features. It was the programming parable “given enough eyeballs, all bugs are shallow” come to life in browser development.

There was an electricity to Mosaic development at NCSA. Internal competition fueled OS teams to get features out the door. Sometimes the Mac version would get to something first. Sometimes it was Bina and Andreessen continuing to work on X-Mosaic. “We would get together, middle of the night, and come up with some cool idea — images was an example of that — then we would go off and race and see who would do it first,” Jon Mittelhauser, creator of the Windows version of Mosaic, later recalled. Sometimes, the features were duds and would hardly go anywhere at all. Other times, as Mittelhauser points out, they were absolutely essential.

In the months after launch, they started to surpass the feature list of even their nearest competitor, ViolaWWW. They added forms support and rich media. They added bookmarks for users to keep track of their links. They even created their own “What’s New” page, updated every single day, which tracked the web’s most popular links. When you opened up Mosaic, the NCSA What’s New page was the first thing you saw. They weren’t just building a browser. They were building a window to the web.

As Mittelhauser points out, it was the <img> tag that became Mosaic’s defining feature, and it did two things. The tag was added without input from Sir Tim Berners-Lee or the wider web community. (Andreessen posted a note to www-talk only after it had already been implemented.) So firstly, it put the Mosaic team in a conflict with other browser makers and some parts of the web community that would last for years.

Secondly, it made Mosaic infinitely more popular. The <img> tag allowed for images to be embedded directly inline in the Mosaic browser. People found the web boring to browse. It was sterile, rigid, and scientific. Inline images changed all that. Within a few months, a new class of web designer was beginning to experiment with what was possible with images on the web. In some ways, it was the tag that made the web famous.

The image tag prompted the feature in The New York Times, and a subsequent write-up in Wired. By the time the press got around to talking about the web, Mosaic was the most popular browser and became a surrogate for the larger web world. “Mosaic” was to browsing the web as “Google” is to searching now.

Ultimately, the higher-ups got involved. NCSA was not a tech company. It was a supercomputing lab. Management came in to help make the Mosaic browser more cohesive, and maybe, more profitable. Licenses were parceled out to a dozen or so companies. Mosaic was bundled into Spry’s Internet in a Box product. It was embedded in enterprise software by the Santa Cruz Operation.

In the end, Mosaic split off into two directions. Pressure from management pushed Andreessen to leave and start a new company. It would be called Netscape. Another of the licensees of the software was a company called Spyglass. They were beginning to have talks with Microsoft. Both would ultimately choose to rewrite the Mosaic browser from scratch, for different reasons. Yet that browser would be their starting point and their products would have lasting implications on the browser market for decades as the world began to see its first commercial browsers.



Chapter 1: Birth

Tim Berners-Lee is fascinated with information. It has been his life’s work. For over four decades, he has sought to understand how it is mapped and stored and transmitted. How it passes from person to person. How the seeds of information become the roots of dramatic change. It is so fundamental to the work that he has done that when he wrote the proposal for what would eventually become the World Wide Web, he called it “Information Management: A Proposal.”

Information is the web’s core function. A series of bytes stream across the world and at the end of it is knowledge. The mechanism for this transfer — what we know as the web — was created by the intersection of two things. The first is the Internet, the technology that makes it all possible. The second is hypertext, the concept that grounds its use. They were brought together by Tim Berners-Lee. And when he was done he did something truly spectacular. He gave it away to everyone to use for free.

When Berners-Lee submitted “Information Management: A Proposal” to his superiors, they returned it with a comment on the top that read simply:

Vague, but exciting…

The web wasn’t a sure thing. Without the hindsight of today it looked far too simple to be effective. In other words, it was a hard sell. Berners-Lee was proficient at many things, but he was never a great salesman. He loved his idea for the web. But he had to convince everybody else to love it too.


Tim Berners-Lee has a mind that races. He has been known — based on interviews and public appearances — to jump from one idea to the next. He is almost always several steps ahead of what he is saying, which is often quite profound. Until recently, he only gave a rare interview here and there, and masked his greatest achievements with humility and a wry British wit.

What is immediately apparent is that Tim Berners-Lee is curious. Curious about everything. It has led him to explore some truly revolutionary ideas before they became truly revolutionary. But it also means that his focus is typically split. It makes it hard for him to hold on to things in his memory. “I’m certainly terrible at names and faces,” he once said in an interview. His original fascination with the elements of the web came from a very personal need to organize his own thoughts and connect them together, disparate and unconnected as they are. It is no surprise that when he reached for a metaphor for that organization, he came up with a web.

As a young boy, his curiosity was encouraged. His parents, Conway Berners-Lee and Mary Lee Woods, were mathematicians. They worked on the Ferranti Mark I, the world’s first commercially available computer, in the 1950s. They fondly speak of Berners-Lee as a child, taking things apart, experimenting with amateur engineering projects. There was nothing that he didn’t seek to understand further. Electronics — and computers specifically — were particularly enchanting.

Berners-Lee sometimes tells the story of a conversation he had with his father as a young boy about the limitations of computers making associations between information that was not intrinsically linked. “The idea stayed with me that computers could be much more powerful,” Berners-Lee recalls, “if they could be programmed to link otherwise unconnected information. In an extreme view, the world can be seen as only connections.” He didn’t know it yet, but Berners-Lee had stumbled upon the idea of hypertext at a very early age. It would be several years before he would come back to it.


History is filled with attempts to organize knowledge. An oft-cited example is the Library of Alexandria, a fabled library of the ancient world thought to have held tens of thousands of meticulously organized texts.


At the turn of the 20th century, Paul Otlet tried something similar in Belgium. His project was called the Répertoire Bibliographique Universel (Universal Bibliography). Otlet and a team of researchers created a library of over 15 million index cards, each with a discrete and small piece of information in topics ranging from science to geography. Otlet devised a sophisticated numbering system that allowed him to link one index card to another. He fielded requests from researchers around the world via mail or telegram, and Otlet’s researchers could follow a trail of linked index cards to find an answer. Once properly linked, information becomes infinitely more useful.

A sudden surge of scientific research in the wake of World War II prompted Vannevar Bush to propose another idea. In his groundbreaking 1945 essay in The Atlantic, entitled “As We May Think,” Bush imagined a mechanical library called a Memex. Like Otlet’s Universal Bibliography, the Memex stored bits of information. But instead of index cards, everything was stored on compact microfilm. Through the process of what he called “associative indexing,” users of the Memex could follow trails of related information through an intricate web of links.

The list of attempts goes on. But it was Ted Nelson who finally gave the concept a name in 1965, two decades after Bush’s article in The Atlantic. He called it hypertext.

Hypertext is, essentially, linked text. Nelson observed that in the real world, we often give meaning to the connections between concepts; it helps us grasp their importance and remember them for later. The proximity of a Post-It to your computer, the orientation of ingredients in your refrigerator, the order of books on your bookshelf. Invisible though they may seem, each of these signifiers holds meaning, whether consciously or subconsciously, and they are only fully realized when you take a step back. Hypertext was a way to bring those same kinds of meaningful connections to the digital world.

Nelson’s primary contribution to hypertext is a number of influential theories and a decades-long project still in progress known as Xanadu. Much like the web, Xanadu uses the power of a network to create a global system of links and pages. However, Xanadu puts a far greater emphasis on the ability to trace text back to its original author for monetization and attribution purposes. That mechanism, known as transclusion, has proved a nearly impossible technological problem to solve.

Nelson’s interest in hypertext stems from the same issue with memory and recall as Berners-Lee’s. He refers to it as his hummingbird mind. Nelson finds it hard to hold on to the associations he creates in the real world. Hypertext offers a way for him to map associations digitally, so that he can call on them later. Berners-Lee and Nelson met for the first time a couple of years after the web was invented. They exchanged ideas and philosophies, and Berners-Lee was able to thank Nelson for his influential thinking. At the end of the meeting, Berners-Lee asked if he could take a picture. Nelson, in turn, asked for a short video recording. Each was commemorating a moment they knew they would eventually forget. And each turned to technology for a solution.

By the mid-80s, on the wave of innovation in personal computing, there were several hypertext applications out in the wild. The hypertext community — a dedicated group of software engineers who believed in the promise of hypertext — created programs for researchers, academics, and even off-the-shelf personal computers. Every research lab worth its salt had a hypertext project. Together they built entirely new paradigms into their software, processes and concepts that feel wonderfully familiar today but were completely outside the realm of possibility just a few years earlier.

At Brown University, the very place where Ted Nelson was studying when he coined the term hypertext, Norman Meyrowitz, Nancy Garrett, and Karen Catlin were the first to breathe life into the hyperlink, which was introduced in their program Intermedia. At Symbolics, Janet Walker was toying with the idea of saving links for later, a kind of speed dial for the digital world, something she was calling a bookmark. At the University of Maryland, Ben Shneiderman sought to compile and link the world’s largest source of information with his Interactive Encyclopedia System.

Dame Wendy Hall, at the University of Southampton, sought to extend the life of the link further in her own program, Microcosm. Each link made by the user was stored in a linkbase, a database apart from the main text specifically designed to store metadata about connections. In Microcosm, links could never die, never rot away. If their connection was severed, they could simply point elsewhere, since links weren’t directly tied to text. You could even write a bit of text alongside a link, expanding on why it was important, or attach separate layers of links to a document: one, for instance, a carefully curated set of references for experts on a given topic; the other a more laid-back set of links for a casual audience.

There were mailing lists and conferences and an entire community that was small, friendly, fiercely competitive and locked in an arms race to find the next big thing. It was impossible not to get swept up in the fervor. Hypertext enabled a new way to store actual, tangible knowledge; with every innovation the digital world became more intricate and expansive and all-encompassing.

Then came the heavy hitters. Under a shroud of mystery, researchers and programmers at the legendary Xerox PARC were building NoteCards. Apple caught wind of the idea and found it so compelling that they shipped their own hypertext application called HyperCard, bundled right into the Mac operating system. If you were a Mac user in the late 80s, you likely have fond memories of HyperCard, an interface that allowed you to create a card, and quickly link it to another. Cards could be anything, a recipe maybe, or the prototype of your latest project. And, one by one, you could link those cards up, visually and with no friction, until you had a digital reflection of your ideas.

Towards the end of the 80s, it was clear that hypertext had a bright future. In just a few short years, the software had advanced in leaps and bounds.


After a brief stint studying physics at The Queen’s College, Oxford, Tim Berners-Lee returned to his first love: computers. He eventually found a short-term, six-month contract at the particle physics lab Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), or simply, CERN.

CERN is responsible for a long line of particle physics breakthroughs. Most recently, they built the Large Hadron Collider, which led to the confirmation of the Higgs Boson particle, a.k.a. the “God particle.”

CERN doesn’t operate like most research labs. Its internal staff makes up only a small percentage of the people that use the lab. Any research team from around the world can come and use the CERN facilities, provided that they are able to prove their research fits within the stated goals of the institution. A majority of CERN occupants are from these research teams. CERN is a dynamic, sprawling campus of researchers, ferrying from location to location on bicycles or mine-carts, working on the secrets of the universe. Each team is expected to bring their own equipment and expertise. That includes computers.

Berners-Lee was hired to assist with software on an earlier version of the particle accelerator called the Proton Synchrotron. When he arrived, he was blown away by the amount of pure, unfiltered information that flowed through CERN. It was nearly impossible to keep track of it all and equally impossible to find what you were looking for. Berners-Lee wanted to capture that information and organize it.

His mind flashed back to that conversation with his father all those years ago. What if it were possible to create a computer program that allowed you to make random associations between bits of information? What if you could, in other words, link one thing to another? He began working on a software project on the side for himself. Years later, that would be the same way he built the web. He called this project ENQUIRE, named for a Victorian handbook he had read as a child.

Using a simple prompt, ENQUIRE users could create a block of info, something like Otlet’s index cards all those years ago. And just like the Universal Bibliography, ENQUIRE allowed you to link one block to another. Tools were bundled in to make it easier to zoom back and see the connections between the links. For Berners-Lee, this filled a simple need: it gave him a digital stand-in for the part of his memory that could never quite hold on to names and faces.

Compared to the software being actively developed at the University of Southampton or at Xerox or Apple, ENQUIRE was unsophisticated. It lacked a visual interface, and its format was rudimentary. A program like HyperCard supported rich media and advanced two-way connections. But ENQUIRE was only Berners-Lee’s first experiment with hypertext. He would drop the project when his contract was up at CERN.

Berners-Lee would go and work for himself for several years before returning to CERN. By the time he came back, there would be something much more interesting for him to experiment with. Just around the corner was the Internet.


Packet switching is the single most important invention in the history of the Internet. It is how messages are transmitted over a globally decentralized network. It was invented independently in the 1960s by two different researchers, Donald Davies and Paul Baran. Both were interested in the way it made networks resilient.

Traditional telecommunications at the time were managed by what is known as circuit switching. With circuit switching, a direct connection is open between the sender and receiver, and the message is sent in its entirety between the two. That connection needs to be persistent and each channel can only carry a single message at a time. That line stays open for the duration of a message and everything is run through a centralized switch. 

If you’re searching for an example of circuit switching, you don’t have to look far. That’s how telephones work (or used to, at least). If you’ve ever seen an old film (or even a TV show like Mad Men) where an operator pulls a plug out of a wall and plugs it back in to connect a telephone call, that’s circuit switching (though that was all eventually automated). Circuit switching works because everything is sent over the wire all at once and through a centralized switch. That’s what the operators are connecting.

Packet switching works differently. Messages are divided into smaller bits, or packets, and sent over the wire a little at a time. They can be sent in any order because each packet has just enough information to know where in the order it belongs. Packets are sent through until the message is complete, and then re-assembled on the other side. There are a few advantages to a packet-switched network. Multiple messages can be sent at the same time over the same connection, split up into little packets. And crucially, the network doesn’t need centralization. Each node in the network can pass around packets to any other node without a central routing system. This made it ideal in a situation that requires extreme adaptability, like in the fallout of an atomic war, Paul Baran’s original reason for devising the concept.
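Purely as an illustrative sketch (my own, not a description of any real protocol's format), the idea fits in a few lines of C: chop a message into numbered packets, let them show up in the wrong order, and use the sequence numbers to put the message back together.

```c
#include <stdio.h>
#include <string.h>

#define PAYLOAD 8   /* bytes of data carried by each packet */

struct packet {
    int  seq;             /* where this piece belongs in the message */
    int  len;             /* how many bytes of data are used */
    char data[PAYLOAD];
};

int main(void) {
    const char *message = "Packets can arrive in any order.";
    int total = (int)strlen(message);
    int count = (total + PAYLOAD - 1) / PAYLOAD;

    /* Sender: split the message into sequence-numbered packets. */
    struct packet pkts[16];
    for (int i = 0; i < count; i++) {
        pkts[i].seq = i;
        pkts[i].len = (i == count - 1) ? total - i * PAYLOAD : PAYLOAD;
        memcpy(pkts[i].data, message + i * PAYLOAD, pkts[i].len);
    }

    /* Receiver: the "network" delivers them in reverse order, but the
     * sequence number tells us where each piece goes. */
    char rebuilt[128] = {0};
    for (int i = count - 1; i >= 0; i--)
        memcpy(rebuilt + pkts[i].seq * PAYLOAD, pkts[i].data, pkts[i].len);

    printf("%s\n", rebuilt);   /* prints the original message */
    return 0;
}
```

No single packet knows anything about the route it took or the packets around it, which is exactly what lets a decentralized network shuffle them along any path that happens to be available.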

When Davies began shopping around his idea for packet switching to the telecommunications industry, he was shown the door. “I went along to Siemens once and talked to them, and they actually used the words, they accused me of technical — they were really saying that I was being impertinent by suggesting anything like packet switching. I can’t remember the exact words, but it amounted to that, that I was challenging the whole of their authority.” Traditional telephone companies were not at all interested in packet switching. But ARPA was.

ARPA, later known as DARPA, was a research agency embedded in the United States Department of Defense. It was created in the throes of the Cold War — a reaction to the launch of the Sputnik satellite by the Soviet Union — but without a core focus. (It was created at the same time as NASA, so launching things into space was already taken.) To adapt to their situation, ARPA recruited research teams from colleges around the country. They acted as a coordinator and mediator between several active university research projects with a military focus.

ARPA’s organization had one surprising and crucial side effect. It was made up mostly of professors and graduate students who were working at its partner universities. The general attitude was that as long as you could prove some sort of modest relation to a military application, you could pitch your project for funding. As a result, ARPA was filled with lots of ambitious and free-thinking individuals working inside of a buttoned-up government agency, with little oversight, coming up with the craziest and most world-changing ideas they could. “We expected that a professional crew would show up eventually to take over the problems we were dealing with,” recalls Bob Kahn, an ARPA programmer critical to the invention of the Internet. The “professionals” never showed up.

One of those professors was Leonard Kleinrock at UCLA. He was involved in the first stages of ARPANET, the network that would eventually become the Internet. His job was to help implement the most controversial part of the project, the still theoretical concept known as packet switching, which enabled a decentralized and efficient design for the ARPANET network. It is likely that the Internet would not have taken shape without it. Once packet switching was implemented, everything came together quickly. By the early 1980s, it was simply called the Internet. By the end of the 1980s, the Internet went commercial and global, including a node at CERN.

The first applications of the Internet are still in use today. FTP, used for transferring files over the network, was one of the first things built. Email is another one. It had been around for a couple of decades on a closed system already. When the Internet began to spread, email became networked and infinitely more useful.

Other projects were aimed at making the Internet more accessible. They had names like Archie, Gopher, and WAIS, and have largely been forgotten. They were united by a common goal of bringing some order to the chaos of a decentralized system. WAIS and Archie did so by indexing the documents put on the Internet to make them searchable and findable by users. Gopher did so with a structured, hierarchical system. 

Kleinrock was there when the first message was ever sent over the Internet. He was supervising that part of the project, and even then, he knew what a revolutionary moment it was. However, he is quick to note that not everybody shared that feeling in the beginning. He recalls the sentiment held by the titans of the telecommunications industry like the Bell Telephone Company. “They said, ‘Little boy, go away,’ so we went away.” Most felt that the project would go nowhere, nothing more than a technological fad.

In other words, no one was paying much attention to what was going on and no one saw the Internet as much of a threat. So when that group of professors and graduate students tried to convince their higher-ups to let the whole thing be free — to let anyone implement the protocols of the Internet without a need for licenses or license fees — they didn’t get much pushback. The Internet slipped into public use and only the true technocratic dreamers of the late 20th century could have predicted what would happen next.


Berners-Lee returned to CERN in a fellowship position in 1984, four years after he had left. A lot had changed. CERN had developed its own network, known as CERNET, but by 1989 it had hooked up to the new, internationally standard Internet. “In 1989, I thought,” he recalls, “look, it would be so much easier if everybody asking me questions all the time could just read my database, and it would be so much nicer if I could find out what these guys are doing by just jumping into a similar database of information for them.” Put another way, he wanted to share his own homepage, and get a link to everyone else’s.

What he needed was a way for researchers to share these “databases” without having to think much about how it all works. His way in with management was operating systems. CERN’s research teams all bring their own equipment, including computers, and there’s no way to guarantee they’re all running the same OS. Interoperability between operating systems is a difficult problem by design; generally speaking, the goal of an OS is to lock you in. Among its many other uses, a globally networked hypertext system like the web was a wonderful way for researchers to share notes between computers running different operating systems.

However, Berners-Lee had a bit of trouble explaining his idea. He’s never exactly been concise. By 1989, when he wrote “Information Management: A Proposal,” Berners-Lee already had worldwide ambitions. The document is thousands of words long, filled with diagrams and charts. It jumps energetically from one idea to the next without fully explaining what’s just been said. Much of what would eventually become the web was included in the document, but it was just too big of an idea. It was met with a lukewarm response — that “Vague, but exciting” comment scrawled across the top.

A year later, in May of 1990, at the encouragement of his boss Mike Sendall (the author of that comment), Berners-Lee circulated the proposal again. This time it was enough to buy him a bit of time internally to work on it. He got lucky. Sendall understood his ambition and aptitude. He wouldn’t always get that kind of chance. The web needed to be marketed internally as an invaluable tool. CERN needed to need it. Taking complex ideas and boiling them down to their most salient, marketable points, however, was not Berners-Lee’s strength. For that, he was going to need a partner. He found one in Robert Cailliau.

Cailliau was a CERN veteran. By 1989, he’d worked there as a programmer for over 15 years. He’d embedded himself in the company culture, proving a useful resource for teams organizing their informational toolset and knowledge-sharing systems. He had helped several teams at CERN do exactly the kind of thing Berners-Lee was proposing, though at a smaller scale.

Temperamentally, Cailliau was about as different from Berners-Lee as you could get. He was hyper-organized and fastidious. He knew how to sell things internally, and he had made plenty of political inroads at CERN. What he shared with Berners-Lee was an almost insatiable curiosity. During his time as a nurse in the Belgian military, he got fidgety. “When there was slack at work, rather than sit in the infirmary twiddling my thumbs, I went and got myself some time on the computer there.” He ended up as a programmer in the military, working on war games and computerized models. He couldn’t help but look for the next big thing.

In the late 80s, Cailliau had a strong interest in hypertext. He was taking a look at Apple’s HyperCard as a potential internal documentation system at CERN when he caught wind of Berners-Lee’s proposal. He immediately recognized its potential.

Working alongside Berners-Lee, Cailliau pieced together a new proposal. Something more concise, more understandable, and more marketable. While Berners-Lee began putting together the technologies that would ultimately become the web, Cailliau began trying to sell the idea to interested parties inside of CERN.

The web, in all of its modern uses and ubiquity, can be difficult to define as just one thing — we have the web on our refrigerators now. In the beginning, however, the web was made up of only a few essential features.

There was the web server, a computer wired to the Internet that can transmit documents and media (webpages) to other computers. Webpages are served via HTTP, a protocol designed by Berners-Lee in the earliest iterations of the web. HTTP is a layer on top of the Internet, and was designed to make things as simple, and resilient, as possible. HTTP is so simple that it forgets a request as soon as it has been served. It has no memory of the webpages it’s served in the past. The only thing HTTP is concerned with is the current request. That makes it magnificently easy to use.
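A hypothetical sketch of that statelessness, in C rather than in a real server: the handler below answers each request using nothing but the request itself, so calling it a hundred times behaves exactly like calling it once. The paths and responses are invented for illustration.

```c
#include <stdio.h>
#include <string.h>

/* No globals, no sessions, no history: the reply depends only on the path. */
static const char *handle_request(const char *path) {
    if (strcmp(path, "/") == 0)
        return "<TITLE>Home</TITLE><P>Welcome.";
    if (strcmp(path, "/about") == 0)
        return "<TITLE>About</TITLE><P>A page about this site.";
    return "<P>Not found.";
}

int main(void) {
    /* Two requests from the "same" visitor; the handler cannot tell. */
    puts(handle_request("/"));
    puts(handle_request("/about"));
    return 0;
}
```

Real HTTP adds methods, headers, and status codes on top, but the forgetfulness is the point: every request stands on its own, which keeps both servers and the protocol itself simple.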

These webpages are sent to browsers, the software that you’re using to read this article. Browsers can read documents handed to them by a server because they understand HTML, another early invention of Tim Berners-Lee. HTML is a markup language; it allows programmers to give meaning to their documents so that they can be understood. The “H” in HTML stands for Hypertext. Like HTTP, HTML — all of the building blocks programmers can use to structure a document — wasn’t all that complex, especially when compared to other hypertext applications at the time. HTML comes from a long line of other, similar markup languages, but Berners-Lee expanded it to include the link, in the form of an anchor tag. The <a> tag is the most important piece of HTML because it serves the web’s greatest function: to link together information.

The hyperlink was made possible by the Universal Resource Identifier (URI), later renamed the Uniform Resource Identifier after the IETF found the word “universal” a bit too grand. But for Berners-Lee, that was exactly the point. “Its universality is essential: the fact that a hypertext link can point to anything, be it personal, local or global, be it draft or highly polished,” he wrote in his personal history of the web. Of all the original technologies that made up the web, Berners-Lee — and several others — have noted that the URL was the most important.

By Christmas of 1990, Tim Berners-Lee had all of that built. A full prototype of the web was ready to go.

Cailliau, meanwhile, had had a bit of success trying to sell the idea to his bosses. He had hoped that his revised proposal would give him a team and some time. Instead he got six months and a single staff member, intern Nicola Pellow. Pellow was new to CERN, on placement for her mathematics degree. But her work on the Line Mode Browser, which enabled people from around the world using any operating system to browse the web, proved a crucial element in the web’s early success. Berners-Lee’s work, combined with the Line Mode Browser, became the web’s first set of tools. It was ready to show to the world.


When the team at CERN submitted a paper on the World Wide Web to the San Antonio Hypertext Conference in 1991, it was soundly rejected. They went anyway, and set up a table with a computer to demo it to conference attendees. One attendee remarked:

They have chutzpah calling that the World Wide Web!

The hallmark of the web was that it was not at all sophisticated. Its use of hypertext was elementary, allowing for only simplistic text-based links. And without two-way links, pretty much a given in hypertext applications, links could go dead at any minute. There was no linkbase, no sophisticated metadata assigned to links. There was just the anchor tag. The protocols that ran on top of the Internet were similarly basic. HTTP only allowed for a handful of actions, and alternatives like Gopher or WAIS offered far more options for advanced connections through the Internet network.

It was hard to explain, difficult to demo, and had overly lofty ambitions. It was created by a man who didn’t have much interest in marketing his ideas. Even the name was somewhat absurd. “WWW” is one of only a handful of acronyms that actually takes longer to say than the words it stands for.

We know how this story ends. The web won. It’s used by billions of people and runs through everything we do. It is among the most remarkable technological achievements of the 20th century.

It had a few advantages, of course. It was instantly global and widely accessible thanks to the Internet. And the URL — and its uniqueness — is one of the more clever concepts to come from networked computing.

But to truly understand why the web succeeded, we have to come back to information. One of Berners-Lee’s deepest-held beliefs is that information is incredibly powerful, and that it deserves to be free. He believed that the web could deliver on that promise. For it to do that, the web would need to spread.

Berners-Lee looked to his predecessors for inspiration: the Internet. The Internet succeeded, in part, because its creators gave it away to everyone. After considering several licensing options, Berners-Lee lobbied CERN to release the web unlicensed to the general public. CERN, an organization far more interested in particle physics breakthroughs than hypertext, agreed. In 1993, the web officially entered the public domain.

And that was the turning point. They didn’t know it then, but that was the moment the web succeeded. When Berners-Lee was able to make globally available information truly free.

In an interview some years ago, Berners-Lee recalled how it was that the web came to be.

I had the idea for it. I defined how it would work. But it was actually created by people.

That may sound like humility from one of the world’s great thinkers — and it is that a little — but it is also the truth. The web was Berners-Lee’s gift to the world. He gave it to us, and we made it what it was. He and his team fought hard at CERN to make that happen.

Berners-Lee knew that with the resources available to him he would never be able to spread the web sufficiently outside of the hallways of CERN. Instead, he packaged up all the code that was needed to build a browser into a library called libwww and posted it to a Usenet group. That was enough for some people to get interested in browsers. But before browsers would be useful, you needed something to browse.

