The Intersection of Speed and Proximity


You ever find yourself in bumper-to-bumper traffic? I did this morning on the way to work (read: whatever cafe I fancy). There’s a pattern to it, right? Stop, go, stop, go, stop… it’s almost rhythmic and harmonious in the most annoying of ways. Everyone in line follows the dance, led by some car up front, each subsequent vehicle pressed right up to the rear of the one ahead for the luxury of moving a few feet further before the next step.

A closeup of three lanes of tight traffic from behind.
Photo by Jakob Jin

Have you tried breaking the pattern? Instead of playing shadow to the car in front of me this morning, I allowed space to open up between us. I’d gradually ease my right foot off the brake pedal and press the gas only once the car ahead gained a little momentum. At that point, my car would begin to crawl. And keep crawling. I rarely had to tap the brakes at all once I got going. In effect, I had sacrificed proximity for a smoother ride. I may not have been traveling the “fastest” in line, but I was certainly gliding along with a lot less friction.

I find that many things in life are like that. Getting closest to anything comes with a cost, whether in money or in consequences. Want the VIP ticket to a concert you’re stoked as heck about? Pony up some extra cash. Want the full story rather than a headline? Just enter your email address. Want up-to-the-second information in your stock ticker? Hand over some account information. Want access to all of today’s televised baseball games? Pick up an ESPN+ subscription.

Proximity and speed are the commodities, the products so to speak. Closer and faster are what’s being sold.

You may have run into the “law of diminishing returns” in some intro-level economics class you took in high school or college. It’s the basis for a large swath of economic theory, but in essence it’s the “too much of a good thing” principle. It’s what AMPM commercials have been preaching this whole time.

I’m embedding the clip instead of linking it up because it clearly illustrates the “problem” of having too much of what you want (or need). Dude resorted to asking two teens to reach into his front pocket for his wallet because his hands were full, creeper. But buy on, the commercial says, because the implication is that there’s never too much of a good thing, even if it ends in a not-so-great situation chock-full of friction.

The one and only thing I took away from physics in college — besides gravitational acceleration being 9.8 m/s² — is that there’s no way to have bigger, cheaper, and faster at the same time. You can take two, but all three cannot play together. For example, you can have a spaceship that’s faster and cheaper, but chances are that it ain’t gonna be bigger than a typical spaceship. If you were to aim for bigger, it’d be a lot less cheap, not only for the extra size but also to make the dang heavy thing go as fast as possible. It’s a good rule in life. I don’t have proof of it, but I’d wager Mick Jagger lives by it, or at least did at one time.

Speed. Proximity. Faster and slower. Closer and further. I’m not going to draw any parallels to web development, UX design, or any other front-end thing. They’re already there.


The Intersection of Speed and Proximity originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.




Is Gutenberg Finally Winning Users Over? We Analyzed 340+ Opinions to Find Out

Over the past few months, I’ve been diving deep into what people really think about WordPress’ block editor – Gutenberg. I thought this was going to be a fun project. I analyzed over 340 opinions from platforms like Reddit, Twitter, YouTube, and WordPress.org. I also spoke with developers, colleagues, and other professionals in the WordPress community to get a well-rounded perspective.

Regexes Got Good: The History And Future Of Regular Expressions In JavaScript


Modern JavaScript regular expressions have come a long way compared to what you might be familiar with. Regexes can be an amazing tool for searching and replacing text, but they have a longstanding reputation (perhaps outdated, as I’ll show) for being difficult to write and understand.

This is especially true in JavaScript-land, where regexes languished for many years, underpowered compared to their more modern counterparts in PCRE, Perl, .NET, Java, Ruby, C++, and Python. Those days are over.

In this article, I’ll recount the history of improvements to JavaScript regexes (spoiler: ES2018 and ES2024 changed the game), show examples of modern regex features in action, introduce you to a lightweight JavaScript library that makes JavaScript stand alongside or surpass other modern regex flavors, and end with a preview of active proposals that will continue to improve regexes in future versions of JavaScript (with some of them already working in your browser today).

The History of Regular Expressions in JavaScript

ECMAScript 3, standardized in 1999, introduced Perl-inspired regular expressions to the JavaScript language. Although it got enough things right to make regexes pretty useful (and mostly compatible with other Perl-inspired flavors), there were some big omissions, even then. And while JavaScript waited 10 years for its next standardized version with ES5, other programming languages and regex implementations added useful new features that made their regexes more powerful and readable.

But that was then.

Did you know that nearly every new version of JavaScript has made at least minor improvements to regular expressions?

Let’s take a look at them.

Don’t worry if it’s hard to understand what some of the following features mean — we’ll look more closely at several of the key features afterward.

  • ES5 (2009) fixed unintuitive behavior by creating a new object every time regex literals are evaluated and allowed regex literals to use unescaped forward slashes within character classes (/[/]/).
  • ES6/ES2015 added two new regex flags: y (sticky), which made it easier to use regexes in parsers, and u (unicode), which added several significant Unicode-related improvements along with strict errors. It also added the RegExp.prototype.flags getter, support for subclassing RegExp, and the ability to copy a regex while changing its flags.
  • ES2018 was the edition that finally made JavaScript regexes pretty good. It added the s (dotAll) flag, lookbehind, named capture, and Unicode properties (via \p{...} and \P{...}, which require ES6’s flag u). All of these are extremely useful features, as we’ll see.
  • ES2020 added the string method matchAll, which we’ll also see more of shortly.
  • ES2022 added flag d (hasIndices), which provides start and end indices for matched substrings.
  • And finally, ES2024 added flag v (unicodeSets) as an upgrade to ES6’s flag u. The v flag adds a set of multicharacter “properties of strings” to \p{...}, multicharacter elements within character classes via \p{...} and \q{...}, nested character classes, set subtraction [A--B] and intersection [A&&B], and different escaping rules within character classes. It also fixed case-insensitive matching for Unicode properties within negated sets [^...].

As for whether you can safely use these features in your code today, the answer is yes! The latest of these features, flag v, is supported in Node.js 20 and 2023-era browsers. The rest are supported in 2021-era browsers or earlier.

Each edition from ES2019 to ES2023 also added additional Unicode properties that can be used via \p{...} and \P{...}. And to be a completionist, ES2021 added string method replaceAll — although, when given a regex, the only difference from ES3’s replace is that it throws if not using flag g.
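For instance, here’s a quick sketch of that replaceAll behavior:

// replaceAll accepts a regex only if it has the global flag
'1-2-3'.replaceAll(/-/g, '+'); // → '1+2+3'
// '1-2-3'.replaceAll(/-/, '+'); // throws a TypeError because the regex lacks flag g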

Aside: What Makes a Regex Flavor Good?

With all of these changes, how do JavaScript regular expressions now stack up against other flavors? There are multiple ways to think about this, but here are a few key aspects:

  • Performance.
    This is an important aspect but probably not the main one since mature regex implementations are generally pretty fast. JavaScript is strong on regex performance (at least considering V8’s Irregexp engine, used by Node.js, Chromium-based browsers, and even Firefox; and JavaScriptCore, used by Safari), but it uses a backtracking engine that is missing any syntax for backtracking control — a major limitation that makes ReDoS vulnerability more common.
  • Support for advanced features that handle common or important use cases.
    Here, JavaScript stepped up its game with ES2018 and ES2024. JavaScript is now best in class for some features like lookbehind (with its infinite-length support) and Unicode properties (with multicharacter “properties of strings,” set subtraction and intersection, and script extensions). These features are either not supported or not as robust in many other flavors.
  • Ability to write readable and maintainable patterns.
    Here, native JavaScript has long been the worst of the major flavors since it lacks the x (“extended”) flag that allows insignificant whitespace and comments. Additionally, it lacks regex subroutines and subroutine definition groups (from PCRE and Perl), a powerful set of features that enable writing grammatical regexes that build up complex patterns via composition.

So, it’s a bit of a mixed bag.

JavaScript regexes have become exceptionally powerful, but they’re still missing key features that could make regexes safer, more readable, and more maintainable (all of which hold some people back from using this power).

The good news is that all of these holes can be filled by a JavaScript library, which we’ll see later in this article.

Using JavaScript’s Modern Regex Features

Let’s look at a few of the more useful modern regex features that you might be less familiar with. You should know in advance that this is a moderately advanced guide. If you’re relatively new to regex, here are some excellent tutorials you might want to start with:

Named Capture

Often, you want to do more than just check whether a regex matches — you want to extract substrings from the match and do something with them in your code. Named capturing groups allow you to do this in a way that makes your regexes and code more readable and self-documenting.

The following example matches a record with two date fields and captures the values:

const record = 'Admitted: 2024-01-01\nReleased: 2024-01-03';
const re = /^Admitted: (?<admitted>\d{4}-\d{2}-\d{2})\nReleased: (?<released>\d{4}-\d{2}-\d{2})$/;
const match = record.match(re);
console.log(match.groups);
/* → {
  admitted: '2024-01-01',
  released: '2024-01-03'
} */

Don’t worry — although this regex might be challenging to understand, later, we’ll look at a way to make it much more readable. The key things here are that named capturing groups use the syntax (?<name>...), and their results are stored on the groups object of matches.

You can also use named backreferences to rematch whatever a named capturing group matched via \k<name>, and you can use the values within search and replace as follows:

// Change 'FirstName LastName' to 'LastName, FirstName'
const name = 'Shaquille Oatmeal';
name.replace(/(?<first>\w+) (?<last>\w+)/, '$<last>, $<first>');
// → 'Oatmeal, Shaquille'
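Here’s a minimal sketch of a named backreference in action, rematching whatever the named group captured:

// The closing quote must be the same character the group captured
const quoted = /(?<quote>['"]).*?\k<quote>/;
quoted.test(`"double quoted"`); // → true
quoted.test(`"mismatched'`);    // → false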

For advanced regexers who want to use named backreferences within a replacement callback function, the groups object is provided as the last argument. Here’s a fancy example:

function fahrenheitToCelsius(str) {
  const re = /(?<degrees>-?\d+(\.\d+)?)F\b/g;
  return str.replace(re, (...args) => {
    const groups = args.at(-1);
    return Math.round((groups.degrees - 32) * 5/9) + 'C';
  });
}
fahrenheitToCelsius('98.6F');
// → '37C'
fahrenheitToCelsius('May 9 high is 40F and low is 21F');
// → 'May 9 high is 4C and low is -6C'

Lookbehind

Lookbehind (introduced in ES2018) is the complement to lookahead, which has always been supported by JavaScript regexes. Lookahead and lookbehind are assertions (similar to ^ for the start of a string or \b for word boundaries) that don’t consume any characters as part of the match. Lookbehinds succeed or fail based on whether their subpattern can be found immediately before the current match position.

For example, the following regex uses a lookbehind (?<=...) to match the word “cat” (only the word “cat”) if it’s preceded by “fat ”:

const re = /(?<=fat )cat/g;
'cat, fat cat, brat cat'.replace(re, 'pigeon');
// → 'cat, fat pigeon, brat cat'

You can also use negative lookbehind — written as (?<!...) — to invert the assertion. That would make the regex match any instance of “cat” that’s not preceded by “fat ”.

const re = /(?<!fat )cat/g;
'cat, fat cat, brat cat'.replace(re, 'pigeon');
// → 'pigeon, fat cat, brat pigeon'

JavaScript’s implementation of lookbehind is one of the very best (matched only by .NET). Whereas other regex flavors have inconsistent and complex rules for when and whether they allow variable-length patterns inside lookbehind, JavaScript allows you to look behind for any subpattern.
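For example, this sketch looks behind for a variable-length subpattern, something many other flavors disallow:

// The lookbehind contains a quantifier, so its length isn't fixed
const afterLabel = /(?<=[A-Za-z]+-)\d+/;
afterLabel.exec('invoice-2024')?.[0]; // → '2024'
afterLabel.exec('2024');              // → null (no label before the digits)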

The matchAll Method

JavaScript’s String.prototype.matchAll was added in ES2020 and makes it easier to operate on regex matches in a loop when you need extended match details. Although other solutions were possible before, matchAll is often easier, and it avoids gotchas, such as the need to guard against infinite loops when looping over the results of regexes that might return zero-length matches.

Since matchAll returns an iterator (rather than an array), it’s easy to use it in a for...of loop.

const str = 'abcd'; // example input
const re = /(?<char1>\w)(?<char2>\w)/g;
for (const match of str.matchAll(re)) {
  const {char1, char2} = match.groups;
  // Print each complete match and matched subpatterns
  console.log(`Matched "${match[0]}" with "${char1}" and "${char2}"`);
}

Note: matchAll requires its regexes to use flag g (global). Also, as with other iterators, you can get all of its results as an array using Array.from or array spreading.

const matches = [...str.matchAll(/./g)];

Unicode Properties

Unicode properties (added in ES2018) give you powerful control over multilingual text, using the syntax \p{...} and its negated version \P{...}. There are hundreds of different properties you can match, which cover a wide variety of Unicode categories, scripts, script extensions, and binary properties.

Note: For more details, check out the documentation on MDN.

Unicode properties require using the flag u (unicode) or v (unicodeSets).
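For example, here’s a small sketch of matching by general category versus by script:

// \p{Letter} matches letters from any script; \p{Script=Greek} narrows it to Greek
'Αθήνα and Athens'.match(/\p{Letter}+/gu);       // → ['Αθήνα', 'and', 'Athens']
'Αθήνα and Athens'.match(/\p{Script=Greek}+/gu); // → ['Αθήνα']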

Flag v

Flag v (unicodeSets) was added in ES2024 and is an upgrade to flag u — you can’t use both at the same time. It’s a best practice to always use one of these flags to avoid silently introducing bugs via the default Unicode-unaware mode. The decision on which to use is fairly straightforward. If you’re okay with only supporting environments with flag v (Node.js 20 and 2023-era browsers), then use flag v; otherwise, use flag u.

Flag v adds support for several new regex features, with the coolest probably being set subtraction and intersection. This allows using A--B (within character classes) to match strings in A but not in B or using A&&B to match strings in both A and B. For example:

// Matches all Greek symbols except the letter 'π'
/[\p{Script_Extensions=Greek}--π]/v

// Matches only Greek letters
/[\p{Script_Extensions=Greek}&&\p{Letter}]/v

For more details about flag v, including its other new features, check out this explainer from the Google Chrome team.

A Word on Matching Emoji

Emoji are 🤩🔥😎👌, but how emoji get encoded in text is complicated. If you’re trying to match them with a regex, it’s important to be aware that a single emoji can be composed of one or many individual Unicode code points. Many people (and libraries!) who roll their own emoji regexes miss this point (or implement it poorly) and end up with bugs.

The following details for the emoji “👩🏻‍🏫” (Woman Teacher: Light Skin Tone) show just how complicated emoji can be:

// Code unit length
'👩🏻‍🏫'.length;
// → 7
// Each astral code point (above \uFFFF) is divided into high and low surrogates

// Code point length
[...'👩🏻‍🏫'].length;
// → 4
// These four code points are: \u{1F469} \u{1F3FB} \u{200D} \u{1F3EB}
// \u{1F469} combined with \u{1F3FB} is '👩🏻'
// \u{200D} is a Zero-Width Joiner
// \u{1F3EB} is '🏫'

// Grapheme cluster length (user-perceived characters)
[...new Intl.Segmenter().segment('👩🏻‍🏫')].length;
// → 1

Fortunately, JavaScript added an easy way to match any individual, complete emoji via \p{RGI_Emoji}. Since this is a fancy “property of strings” that can match more than one code point at a time, it requires ES2024’s flag v.
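Here’s a small sketch of what that looks like:

// \p{RGI_Emoji} is a "property of strings", so it requires flag v
'Teacher: 👩🏻‍🏫 at 🏫'.match(/\p{RGI_Emoji}/gv);
// → ['👩🏻‍🏫', '🏫']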

If you want to match emojis in environments without v support, check out the excellent libraries emoji-regex and emoji-regex-xs.
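If you go that route, usage looks roughly like this (assuming the package’s default export, which returns a ready-to-use regex with flag g):

import emojiRegex from 'emoji-regex';

const re = emojiRegex();
'Teacher: 👩🏻‍🏫 at 🏫'.match(re); // → ['👩🏻‍🏫', '🏫']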

Making Your Regexes More Readable, Maintainable, and Resilient

Despite the improvements to regex features over the years, native JavaScript regexes of sufficient complexity can still be outrageously hard to read and maintain.

Regular Expressions are SO EASY!!!! pic.twitter.com/q4GSpbJRbZ

— Garabato Kid (@garabatokid) July 5, 2019


ES2018’s named capture was a great addition that made regexes more self-documenting, and ES6’s String.raw tag allows you to avoid escaping all your backslashes when using the RegExp constructor. But for the most part, that’s it in terms of readability.

However, there’s a lightweight and high-performance JavaScript library named regex (by yours truly) that makes regexes dramatically more readable. It does this by adding key missing features from Perl-Compatible Regular Expressions (PCRE) and outputting native JavaScript regexes. You can also use it as a Babel plugin, which means that regex calls are transpiled at build time, so you get a better developer experience without users paying any runtime cost.

PCRE is a popular C library used by PHP for its regex support, and it’s available in countless other programming languages and tools.

Let’s briefly look at some of the ways the regex library, which provides a template tag named regex, can help you write complex regexes that are actually understandable and maintainable by mortals. Note that all of the new syntax described below works identically in PCRE.

Insignificant Whitespace and Comments

By default, regex allows you to freely add whitespace and line comments (starting with #) to your regexes for readability.

import {regex} from 'regex';
const date = regex`
  # Match a date in YYYY-MM-DD format
  (?<year>  \d{4}) - # Year part
  (?<month> \d{2}) - # Month part
  (?<day>   \d{2})   # Day part
`;

This is equivalent to using PCRE’s xx flag.

Subroutines and Subroutine Definition Groups

Subroutines are written as \g<name> (where name refers to a named group), and they treat the referenced group as an independent subpattern that they try to match at the current position. This enables subpattern composition and reuse, which improves readability and maintainability.

For example, the following regex matches an IPv4 address such as “192.168.12.123”:

import {regex} from 'regex';
const ipv4 = regex`\b
  (?<byte> 25[0-5] | 2[0-4]\d | 1\d\d | [1-9]?\d)
  # Match the remaining 3 dot-separated bytes
  (\. \g<byte>){3}
\b`;

You can take this even further by defining subpatterns for use by reference only via subroutine definition groups. Here’s an example that improves the regex for admittance records that we saw earlier in this article:

const record = 'Admitted: 2024-01-01\nReleased: 2024-01-03';
const re = regex`
  ^ Admitted:\ (?<admitted> \g<date>) \n
    Released:\ (?<released> \g<date>) $

  (?(DEFINE)
    (?<date>  \g<year>-\g<month>-\g<day>)
    (?<year>  \d{4})
    (?<month> \d{2})
    (?<day>   \d{2})
  )
`;
const match = record.match(re);
console.log(match.groups);
/* → {
  admitted: '2024-01-01',
  released: '2024-01-03'
} */

A Modern Regex Baseline

regex includes the v flag by default, so you never forget to turn it on. And in environments without native v, it automatically switches to flag u while applying v’s escaping rules, so your regexes are forward and backward-compatible.

It also implicitly enables the emulated flags x (insignificant whitespace and comments) and n (“named capture only” mode) by default, so you don’t have to continually opt into their superior modes. And since it’s a raw string template tag, you don’t have to escape your backslashes \\\\ like with the RegExp constructor.

Atomic Groups and Possessive Quantifiers Can Prevent Catastrophic Backtracking

Atomic groups and possessive quantifiers are another powerful set of features added by the regex library. Although they’re primarily about performance and resilience against catastrophic backtracking (also known as ReDoS or “regular expression denial of service,” a serious issue where certain regexes can take forever when searching particular, not-quite-matching strings), they can also help with readability by allowing you to write simpler patterns.
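Here’s a rough sketch of both, assuming the library’s documented (?>...) and ++ syntax. The plain (\w+\s?)+ pattern is a textbook ReDoS risk; the atomic and possessive versions below fail fast instead of retrying every possible split of a long, almost-matching input:

import {regex} from 'regex';

// Atomic group: once (?>...) has matched, the engine won't backtrack into it
const atomicWords = regex`^(?>\w+\s?)+$`;

// Possessive quantifier: ++ applies the same "no backtracking" rule to the quantifier itself
const possessiveWords = regex`^(?:\w+\s?)++$`;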

Note: You can learn more in the regex documentation.

What’s Next? Upcoming JavaScript Regex Improvements

There are a variety of active proposals for improving regexes in JavaScript. Below, we’ll look at the three that are well on their way to being included in future editions of the language.

Duplicate Named Capturing Groups

This is a Stage 3 (nearly finalized) proposal. Even better, it recently gained support in all major browsers.

When named capturing was first introduced, it required that all (?<name>...) captures use unique names. However, there are cases when you have multiple alternate paths through a regex, and it would simplify your code to reuse the same group names in each alternative.

For example:

/(?<year>\d{4})-\d\d|\d\d-(?<year>\d{4})/

This proposal enables exactly that, so the example above no longer triggers a “duplicate capture group name” error. Note that names must still be unique within each alternative path.
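In an environment that supports the proposal, match.groups.year is populated from whichever alternative matched:

const re = /(?<year>\d{4})-\d\d|\d\d-(?<year>\d{4})/;
'2025-06'.match(re).groups.year; // → '2025'
'06-2025'.match(re).groups.year; // → '2025'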

Pattern Modifiers (aka Flag Groups)

This is another Stage 3 proposal. It’s already supported in Chrome/Edge 125 and Opera 111, and it’s coming soon for Firefox. No word yet on Safari.

Pattern modifiers use (?ims:...), (?-ims:...), or (?im-s:...) to turn the flags i, m, and s on or off for only certain parts of a regex.

For example:

/hello-(?i:world)/
// Matches 'hello-WORLD' but not 'HELLO-WORLD'

Escape Regex Special Characters with RegExp.escape

This proposal recently reached Stage 3 and has been a long time coming. It isn’t yet supported in any major browsers. The proposal does what it says on the tin, providing the function RegExp.escape(str), which returns the string with all regex special characters escaped so you can match them literally.

If you need this functionality today, the most widely-used package (with more than 500 million monthly npm downloads) is escape-string-regexp, an ultra-lightweight, single-purpose utility that does minimal escaping. That’s great for most cases, but if you need assurance that your escaped string can safely be used at any arbitrary position within a regex, escape-string-regexp recommends the regex library that we’ve already looked at in this article. The regex library uses interpolation to escape embedded strings in a context-aware way.
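As a rough sketch of that approach (the search term here is just an illustration):

import escapeStringRegexp from 'escape-string-regexp';

const userInput = 'C++ (draft?)'; // hypothetical user-supplied search term
const re = new RegExp(escapeStringRegexp(userInput), 'g');
re.test('Discussing C++ (draft?) today'); // → true

// Once RegExp.escape lands, the equivalent would be:
// const re2 = new RegExp(RegExp.escape(userInput), 'g');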

Conclusion

So there you have it: the past, present, and future of JavaScript regular expressions.

If you want to journey even deeper into the lands of regex, check out Awesome Regex for a list of the best regex testers, tutorials, libraries, and other resources. And for a fun regex crossword puzzle, try your hand at regexle.

May your parsing be prosperous and your regexes be readable.

The AI Bubble Might Burst Soon – And That’s a Good Thing


Almost two years into the AI hype, a looming market correction may soon separate true innovators from those who are trying to capitalize on the hype. The burst of the bubble could pave the way for a more mature phase of AI development.


Amidst recent turmoil on the stock markets, during which the 7 biggest tech companies collectively lost some $650 billion, experts and media alike are warning that the next tech bubble is about to pop (e.g.: The Guardian, Cointelegraph, The Byte). The AI industry has indeed been riding a wave of unprecedented hype and investment, with inflated expectations potentially setting up investors and CEOs for a rude awakening. However, the bursting of a bubble often has a cleansing effect, separating the wheat from the chaff. This article examines the current state of the AI industry, exploring both the signs that point to an imminent burst and the factors that suggest continued growth.

Why the Bubble Must Burst

Since the release of ChatGPT started a mainstream hype around AI, it looks like investors jumped at the opportunity to put their money into AI-related projects. Billions have been spent on them this year alone, and analysts expect AI to become a $1 trillion industry within the next 4-5 years. OpenAI alone is currently valued at $80 billion, which is almost twice the valuation of General Motors, or four times that of Western Digital. The list of other AI companies with high valuations has been growing quickly, as has the list of failed AI startups. At the same time, the progress visible to end-users has slowed down, and the hype around AI has been overshadowed by an endless string of PR disasters.

Here are three key reasons why the AI bubble might pop soon:

  1. AI doesn't sell. A study led by researchers at Washington State University revealed that using 'artificial intelligence' in product descriptions decreases purchase likelihood. This effect most likely stems from the emotional trust people typically associate with human interaction. AI distrust might have been further fueled by various PR disasters, ranging from lying chatbots to discriminatory algorithms and wasteful public spending on insubstantial projects.
  2. AI investments aren't paying off. Most AI companies remain unprofitable and lack clear paths to profitability. For instance, OpenAI received a $13 billion investment from Microsoft for a 49% stake. Yet OpenAI's estimated annual revenue from 8.9 million subscribers is just $2.5 billion. Even if operational costs were minimal (which they aren't), Microsoft would face a long road to recouping its investment, let alone profiting.
  3. Regulation is hampering progress. End-users have seen little tangible improvement in AI applications over the past year. While video generation has advanced, ChatGPT and other LLMs have become less useful despite boasting higher model numbers and larger training data. A multitude of restrictions aimed, for example, at copyright protection, preventing misuse, and ensuring inoffensiveness has led to a "dumbification of LLMs." This has created a noticeable gap between hype and reality. Nevertheless, AI companies continue hyping minor updates and small new features that fail to meet expectations.

It's also crucial to remember that technology adoption takes time. Despite ChatGPT's record-breaking user growth, it still lags behind Netflix by about 100 million users, and has only about 3.5% of Netflix's paid subscribers. Consider that it took 30 years for half the world's population to get online after the World Wide Web's birth in 1989. Even today, 37% globally (and 9-12% in the US and Europe) don't use the internet. Realistically, AI's full integration into our lives will take considerable time. The burst of economic bubbles is much more likely to occur before that.

The Thing About Bubbles

A potential counter-argument to the thesis that AI development is slowing down, lacks application value, and will struggle to expand its userbase is that some big players might be hiding groundbreaking developments, which they could pull out of their metaphorical hats at any moment. Speculations about much better models or even AGI lurking on OpenAI's internal testing network are nothing new. And indeed, technology still in development usually surpasses the capabilities of technology that has already been thoroughly tested and released; such is the nature of development. While AI development might well have a surprise or two in store, and new applications arise all the time, it is questionable whether there's a wildcard that can counteract an overheated market and hastily made investments in the billions. So anyone who's invested in AI-related stocks might want to buckle up, as turbulent quarters are likely ahead.

Now forget your investment portfolio and think about progress. Here's why a bursting AI bubble might actually benefit the industry:

The thing about bubbles is, they don't say much about the real-life value of a new technology. Sure, the bursting of a bubble might show that a useless thing is useless, as was the case with NFTs, which got hyped up and then quickly lost their "value" (NFTs really were the tulip mania of the digital age). But the bursting of bubbles also does not render a useful thing useless. There are many good examples of this:

  • During the .com bubble of the late 1990s, countless companies boasting little more than a registered domain name were drastically overvalued, and when the bubble did burst, their stock became worthless from one day to the next. Yet .com services are not only still around, they have become the driving force behind the economy.
  • The bursting of the crypto bubble in early 2018 blasted many shitcoins into oblivion, but Bitcoin is still standing and not far off its all-time high. Also, blockchain tech is already applied in many areas other than finance, e.g., supply-chain management.
  • The crash of the housing market in 2007 worked a little differently, as it was not a tech bubble. Property was hopelessly overvalued, and people couldn't keep up with rising interest rates. The bursting of the bubble exposed a dire reality of financial markets, where investors bet on whether you will be able to pay your mortgage or not. And today? Well, take a look at the chart: on average, housing in the US costs almost twice as much now as it did at the height of the bubble of 2007. Even when adjusted for inflation, buying a house is now more expensive than ever before.

In the case of the housing market, the bursting of the bubble had the effect that mortgages became more difficult to access and financial speculation became a little more regulated. In the case of the .com and crypto bubbles, however, the burst had a cleansing effect that drove away the fakes and shillers and left alive the fraction of projects that were actually on to something. It can be suspected that a bursting of the AI bubble would have a similar effect.

While the prospect of an AI bubble burst may cause short-term market turbulence, it could ultimately prove beneficial for the industry's long-term health and innovation. A market correction would likely weed out ventures that lack substance and redirect focus towards applications with real-world impact.

Investors, developers, and users alike should view this potential reset not as an end, but as a new beginning. The AI revolution is far from over; it's entering a more mature, pragmatic phase.

Compatibility Issues with SAS Controllers and Host Bus Adapters (HBA) in My


Hello DaniWeb community,

I'm currently working on a server build and have run into some issues with my SAS controllers and Host Bus Adapters (HBA). Here's the situation:

I'm using a [specific server motherboard model] with [specific CPU model] and [specific amount of RAM], and I've installed a [specific model] SAS controller. The goal is to connect several high-capacity SAS drives for storage, but I'm experiencing unexpected behavior. The system sometimes fails to recognize the drives during boot, and even when it does, the drives are intermittently dropping out during operation.

I've tried updating the firmware on the SAS controller and the HBA, and I've also tested different cables and power supplies, but the issue persists. Could this be a compatibility issue between the SAS controller and the HBA? Or is there something else I might be overlooking?

Any suggestions or insights would be greatly appreciated, especially if you've had similar experiences with SAS controllers and HBAs.

Thanks in advance!

Why WordPress: The Right Choice for Everyone


Choosing the right platform to build an online presence is more crucial than ever. With nearly 20 years of experience working with WordPress, I’ve witnessed countless trends rise and fall. Yet, one thing has always stood out: the importance of selecting a website solution that aligns with long-term goals, offers scalability, and ensures complete control.

The post Why WordPress: The Right Choice for Everyone appeared first on WP Engine.

GPT-4o Snapshot vs Meta Llama 3.1 70b for Zero-Shot Text Summarization


In a previous article, I compared GPT-4o mini vs. GPT-4o and GPT-3.5 Turbo for zero-shot text summarization. The results showed that GPT-4o mini achieves comparable performance at a much lower price than the other models.

I will compare Meta Llama 3.1 70b with the OpenAI GPT-4o snapshot for zero-shot text summarization in this article. The Meta Llama 3.1 series consists of Meta's state-of-the-art LLMs, including Llama 3.1 8b, Llama 3.1 70b, and Llama 3.1 405b. On the other hand, the [OpenAI GPT-4o](https://platform.openai.com/docs/models) snapshot is OpenAI's latest LLM. We will use the Groq API to access Meta Llama 3.1 70b and the OpenAI API to access the GPT-4o snapshot model.

So, let's begin without further ado.

Installing and Importing Required Libraries

The following script installs the Python libraries you will need to run scripts in this article.


!pip install openai
!pip install groq
!pip install rouge-score
!pip install --upgrade openpyxl
!pip install pandas openpyxl

The script below imports the required libraries into your Python application.


import os
import time
import pandas as pd
from rouge_score import rouge_scorer
from openai import OpenAI
from groq import Groq

Importing the Dataset

This article will summarize the text in the News Articles with Summary dataset. The dataset consists of article content and human-generated summaries.

The following script imports the dataset file into a Pandas DataFrame.


# Kaggle dataset download link
# https://github.com/reddzzz/DataScience_FP/blob/main/dataset.xlsx


dataset = pd.read_excel(r"D:\Datasets\dataset.xlsx")
dataset = dataset.sample(frac=1)
dataset['summary_length'] = dataset['human_summary'].apply(len)
average_length = dataset['summary_length'].mean()
print(f"Average length of summaries: {average_length:.2f} characters")
print(dataset.shape)
dataset.head()

Output:

image1.png

The content column stores the article's text, and the human_summary column contains the corresponding human-generated summaries.

We also calculate the average number of characters in the human-generated summaries, which we will use as the target length when generating summaries with the LLMs.

Text Summarization with GPT-4o Snapshot

We are now ready to summarize articles using GPT-4o snapshot and Llama 3.1 70b.

First, we'll create an instance of the OpenAI class, which we'll use to interact with various OpenAI language models. When initializing this object, you must provide your OpenAI API Key.

Additionally, we'll define the calculate_rouge() function, which computes the ROUGE-1, ROUGE-2, and ROUGE-L scores by comparing the LLM-generated summaries with the human-generated ones.

ROUGE scores are used to evaluate the quality of machine-generated text, such as summaries, by comparing them with human-generated text. ROUGE-1 evaluates the overlap of unigrams (single words), ROUGE-2 considers bigrams (pairs of consecutive words), and ROUGE-L focuses on the longest common subsequence between the two texts.


client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)

# Function to calculate ROUGE scores
def calculate_rouge(reference, candidate):
    scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'], use_stemmer=True)
    scores = scorer.score(reference, candidate)
    return {key: value.fmeasure for key, value in scores.items()}

Next, we will iterate through the first 20 articles in the dataset and call the GPT-4o snapshot model to produce a summary of the article with a target length of 1150 characters. We will use 1150 characters because the average length of the human-generated summaries is 1168 characters. Next, the LLM-generated and human-generated summaries are passed to the calculate_rouge() function, which returns ROUGE scores for the LLM-generated summaries. These ROUGE scores, along with the generated summaries, are stored in the results list.


%%time

results = []

i = 0

for _, row in dataset[:20].iterrows():
    article = row['content']
    human_summary = row['human_summary']

    i = i + 1
    print(f"Summarizing article {i}.")

    prompt = f"Summarize the following article in 1150 characters. The summary should look like human created:\n\n{article}\n\nSummary:"

    response = client.chat.completions.create(
        model= "gpt-4o-2024-08-06",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1150,
        temperature=0.7
    )
    generated_summary = response.choices[0].message.content
    rouge_scores = calculate_rouge(human_summary, generated_summary)

    results.append({
        'article_id': row.id,
        'generated_summary': generated_summary,
        'rouge1': rouge_scores['rouge1'],
        'rouge2': rouge_scores['rouge2'],
        'rougeL': rouge_scores['rougeL']
    })

Output:

image2.png

The above output shows that it took 59 seconds to summarize 20 articles.

Next, we convert the results list into a results_df dataframe and display the average ROUGE scores for 20 articles.


results_df = pd.DataFrame(results)
mean_values = results_df[["rouge1", "rouge2", "rougeL"]].mean()
print(mean_values)

Output:


rouge1    0.386724
rouge2    0.100371
rougeL    0.187491
dtype: float64

The above results show that the ROUGE scores obtained by the GPT-4o snapshot are slightly lower than those obtained by the GPT-4o model in the previous article.

Let's evaluate the summarization of GPT-4o using another LLM, GPT-4o mini, in this case.

In the following script, we define the llm_evaluate_summary() function, which accepts the original article and LLM-generated summary and evaluates it on the completeness, conciseness, and coherence criteria.


def llm_evaluate_summary(article, summary):
    prompt = f"""Evaluate the following summary for the given article. Rate it on a scale of 1-10 for:
    1. Completeness: Does it capture all key points?
    2. Conciseness: Is it brief and to the point?
    3. Coherence: Is it well-structured and easy to understand?

    Article: {article}

    Summary: {summary}

    Provide the ratings as a comma-separated list (completeness,conciseness,coherence).
    """
    response = client.chat.completions.create(
        model= "gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=100,
        temperature=0.7
    )
    return [float(score) for score in response.choices[0].message.content.strip().split(',')]

We iterate through the first 20 articles and pass the content and LLM-generated summaries to the llm_evaluate_summary() function.


scores_dict = {'completeness': [], 'conciseness': [], 'coherence': []}

i = 0
for _, row in results_df.iterrows():
    i = i + 1
    # Corrected method to access content by article_id
    article = dataset.loc[dataset['id'] == row['article_id'], 'content'].iloc[0]
    scores = llm_evaluate_summary(article, row['generated_summary'])
    print(f"Article ID: {row['article_id']}, Scores: {scores}")

    # Store the scores in the dictionary
    scores_dict['completeness'].append(scores[0])
    scores_dict['conciseness'].append(scores[1])
    scores_dict['coherence'].append(scores[2])

Finally, the script below calculates and displays the average scores for completeness, conciseness, and coherence for GPT-4o snapshot summaries.


# Calculate the average scores
average_scores = {
    'completeness': sum(scores_dict['completeness']) / len(scores_dict['completeness']),
    'conciseness': sum(scores_dict['conciseness']) / len(scores_dict['conciseness']),
    'coherence': sum(scores_dict['coherence']) / len(scores_dict['coherence']),
}

# Convert to DataFrame for better visualization (optional)
average_scores_df = pd.DataFrame([average_scores])
average_scores_df.columns = ['Completeness', 'Conciseness', 'Coherence']

# Display the DataFrame
average_scores_df.head()

Output:

image3.png

Text Summarization with Llama 3.1 70b

In this section, we will perform a zero-shot text summarization of the same set of articles using the Llama 3.1 70b model.

You should try the Llama 3.1 405b model to get better results. However, at the time of writing this article, Groq Cloud had suspended the API calls for Llama 405b due to excessive demand. You can also try other cloud providers to run Llama 3.1 405b.

The process remains the same for text summarization using Meta Llama 3.1 70b. The only difference is that we will create an object of the Groq client in this case.


client = Groq(
    api_key=os.environ.get("GROQ_API_KEY"),
)

Next, we will iterate through the first 20 articles in the dataset, generate their summaries using the Llama 3.1 70b model, calculate ROUGE scores, and store the results in the results list.


%%time

results = []

i = 0

for _, row in dataset[:20].iterrows():
    article = row['content']
    human_summary = row['human_summary']

    i = i + 1
    print(f"Summarizing article {i}.")

    prompt = f"Summarize the following article in 1150 characters. The summary should look like human created:\n\n{article}\n\nSummary:"

    response = client.chat.completions.create(
          model="llama-3.1-70b-versatile",
          temperature = 0.7,
          max_tokens = 1150,
          messages=[
                {"role": "user", "content":  prompt}
            ]
    )

    generated_summary = response.choices[0].message.content
    rouge_scores = calculate_rouge(human_summary, generated_summary)

    results.append({
        'article_id': row.id,
        'generated_summary': generated_summary,
        'rouge1': rouge_scores['rouge1'],
        'rouge2': rouge_scores['rouge2'],
        'rougeL': rouge_scores['rougeL']
    })

Output:

image4.png

The above output shows that it only took 24 seconds to process 20 articles using Llama 3.1 70b. The faster processing is because Llama 3.1 70b is a smaller model than the GPT-4o snapshot. Also, Groq uses LPUs (language processing units), which are much faster for LLM inference.

Next, we will convert the results list into the results_df dataframe and display the average ROUGE scores.


results_df = pd.DataFrame(results)
mean_values = results_df[["rouge1", "rouge2", "rougeL"]].mean()
print(mean_values)

Output:


rouge1    0.335863
rouge2    0.080865
rougeL    0.170834
dtype: float64

The above output shows that the ROUGE scores for Meta Llama 3.1 70b are lower than those of the GPT-4o snapshot model. I would again stress that you should use Llama 3.1 405b to get better results.

Finally, we will evaluate the summaries generated via Llama 3.1 70b using the GPT-4o mini model for completeness, conciseness, and coherence.


client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)

scores_dict = {'completeness': [], 'conciseness': [], 'coherence': []}

i = 0
for _, row in results_df.iterrows():
    i = i + 1
    # Corrected method to access content by article_id
    article = dataset.loc[dataset['id'] == row['article_id'], 'content'].iloc[0]
    scores = llm_evaluate_summary(article, row['generated_summary'])
    print(f"Article ID: {row['article_id']}, Scores: {scores}")

    # Store the scores in the dictionary
    scores_dict['completeness'].append(scores[0])
    scores_dict['conciseness'].append(scores[1])
    scores_dict['coherence'].append(scores[2])

# Calculate the average scores
average_scores = {
    'completeness': sum(scores_dict['completeness']) / len(scores_dict['completeness']),
    'conciseness': sum(scores_dict['conciseness']) / len(scores_dict['conciseness']),
    'coherence': sum(scores_dict['coherence']) / len(scores_dict['coherence']),
}

# Convert to DataFrame for better visualization (optional)
average_scores_df = pd.DataFrame([average_scores])
average_scores_df.columns = ['Completeness', 'Conciseness', 'Coherence']

# Display the DataFrame
average_scores_df.head()

Output:

image5.png

The above output shows that Llama 3.1 70b achieves performance similar to the GPT-4o snapshot for text summarization when evaluated using a third LLM.

Conclusion

Meta Llama 3.1 series models are state-of-the-art open-source models. This article shows that Meta Llama 3.1 70b performs very similarly to the GPT-4o snapshot for zero-shot text summarization. I suggest you try the Llama 3.1 405b model via Groq to see if you can get better results than GPT-4o.

Pricing Projects As A Freelancer Or Agency Owner


Pricing projects can be one of the most challenging aspects of running a digital agency or working as a freelance web designer. It’s a topic that comes up frequently in discussions with fellow professionals in my Agency Academy.

Three Approaches to Pricing

Over my years in the industry, I’ve found that there are essentially three main approaches to pricing:

  • Fixed price,
  • Time and materials,
  • And value-based pricing.

Each has its merits and drawbacks, and understanding these can help you make better decisions for your business. Let’s explore each of these in detail and then dive into what I believe is the most effective strategy.

Fixed Price

Fixed pricing is often favored by clients because it reduces their risk and allows for easier comparison between competing proposals. On the surface, it seems straightforward: you quote a price, the client agrees, and you deliver the project for that amount. However, this approach comes with significant drawbacks for agencies and freelancers:

  • Estimating accurately is incredibly challenging.
    In the early stages of a project, we often don’t have enough information to provide a precise quote. Clients may not have a clear idea of their requirements, or there might be technical complexities that only become apparent once work begins. This lack of clarity can lead to underquoting, which eats into your profits, or overquoting, which might cost you the job.
  • There’s no room for adaptation based on testing or insights gained during the project.
    Web design and development is an iterative process. As we build and test, we often discover better ways to implement features or uncover user needs that weren’t initially apparent. With a fixed price model, these improvements are often seen as “scope creep” and can lead to difficult conversations with clients about additional costs.
  • The focus shifts from delivering the best possible product to sticking within the agreed-upon scope.
    This can result in missed opportunities for innovation and improvement, ultimately leading to a less satisfactory end product for the client.

While fixed pricing might seem straightforward, it’s not without its complications. The rigidity of this model can stifle creativity and adaptability, two crucial elements in successful web projects. So, let’s look at an alternative approach that offers more flexibility.

Time and Materials

Time and materials (T&M) pricing offers a fairer system where the client only pays for the hours actually worked. This approach has several advantages:

  • Allows for greater adaptability as the project progresses. If new requirements emerge or if certain tasks take longer than expected, you can simply bill for the additional time. This flexibility can lead to better outcomes as you’re not constrained by an initial estimate.
  • Encourages transparency and open communication. Clients can see exactly what they’re paying for, which can foster trust and understanding of the work involved.
  • Reduces the risk of underquoting. You don’t have to worry about eating into your profits if a task takes longer than expected.

However, T&M pricing isn’t without its drawbacks:

  • It carries a higher perceived risk for the client, as the final cost isn’t determined upfront. This can make budgeting difficult for clients and may cause anxiety about runaway costs.
  • It requires careful tracking and regular communication about hours spent. Without this, clients may be surprised by the final bill, leading to disputes.
  • Some clients may feel it incentivizes inefficiency, as taking longer on tasks results in higher bills.

T&M pricing can work well in many scenarios, especially for long-term or complex projects where requirements may evolve. However, it’s not always the perfect solution, particularly for clients with strict budgets or those who prefer more certainty. There’s one more pricing model that’s often discussed in the industry, which attempts to tie pricing directly to results.

Value-Based Pricing

Value-based pricing is often touted as the holy grail of pricing strategies. The idea is to base your price on the value your work will generate for the client rather than on the time it takes or a fixed estimate. While this sounds great in theory, it’s rarely a realistic approach in our industry. Here’s why:

  • It’s only suitable for projects where you can tie your efforts directly to ROI (Return on Investment). For example, if you’re redesigning an e-commerce site, you might be able to link your work to increased sales. However, for many web projects, the value is more intangible or indirect.
  • Accurately calculating ROI is often difficult or impossible in web design and development. Many factors contribute to a website’s success, and isolating the impact of design or development work can be challenging.
  • It requires a deep understanding of the client’s business and industry. Without this, it’s hard to accurately assess the potential value of your work.
  • Clients may be reluctant to share the financial information necessary to make value-based pricing work. They might see it as sensitive data or simply may not have accurate projections.
  • It can lead to difficult conversations if the projected value isn’t realized. Was it due to your work or other factors beyond your control?

While these three approaches form the foundation of most pricing strategies, the reality of pricing projects is often more nuanced and complex. In fact, as I point out in my article “How To Work Out What To Charge Clients: The Honest Version”, pricing often involves a mix of educated guesswork, personal interest in the project, and an assessment of what the market will bear.

Given the challenges with each of these pricing models, you might be wondering if there’s a better way. In fact, there is, and it starts with a different approach to the initial client conversation.

Start by Discussing Appetite

Instead of jumping straight into deliverables or hourly rates, I’ve found it more effective to start by discussing what 37signals calls “appetite” in their book Shaping Up. Appetite is how much the product owner is willing to invest based on the expected return for their business. This concept shifts the conversation from “What will this cost?” to “What is this worth to you?”

This approach is beneficial for several reasons:

  • Focuses on the budget rather than trying to nail down every deliverable upfront. This allows for more flexibility in how that budget is allocated as the project progresses.
  • Allows you to tailor your proposal to what the client can actually afford. There’s no point in proposing a $100,000 solution if the client only has $20,000 to spend.
  • Helps set realistic expectations from the start. If a client’s appetite doesn’t align with what’s required to meet their goals, you can have that conversation early before investing time in detailed proposals.
  • Shifts the conversation from price comparison to value delivery. Instead of competing solely on price, you’re discussing how to maximize the value of the client’s investment.
  • Mirrors how real estate agents work — they ask for your budget to determine what kind of properties to show you. This analogy can help clients understand why discussing budgets early is crucial.

To introduce this concept to clients, I often use the real estate analogy. I explain that even if you describe your ideal house (e.g., 3 bedrooms, specific location), a real estate agent still cannot give you a price because it depends on many other factors, including the state of repair and nearby facilities that may impact value. Similarly, in web design and development, many factors beyond the basic requirements affect the final cost and value of a project.

Once you’ve established the client’s appetite, you’re in a much better position to structure your pricing. But how exactly should you do that? Let me share a strategy that’s worked well for me and many others in my Agency Academy.

Improve Your Estimates With Sub-Projects

Here’s an approach I’ve found highly effective:

  1. Take approximately 10% of the total budget for a discovery phase. This can be a separate contract with a fixed price. During this phase, you dig deep into the client’s needs, goals, and constraints. You might conduct user research, analyze competitors, and start mapping out the project’s architecture.
  2. Use the discovery phase to define what needs to be prototyped, allowing you to produce a fixed price for the prototyping sub-project. This phase might involve creating wireframes, mockups, or even a basic working prototype of key features.
  3. Test and evolve the prototype, using it as a functional specification for the build. This detailed specification allows you to quote the build accurately. By this point, you have a much clearer picture of what needs to be built, reducing the risk of unexpected complications.

This approach combines elements of fixed pricing (for each sub-project) with the flexibility to adapt between phases. It allows you to provide more accurate estimates while still maintaining the ability to pivot based on what you learn along the way.

Advantages of the Sub-Project Approach

This method offers several key benefits:

  • Clients appreciate the sense of control over the budget. They can decide after each phase whether to continue, giving them clear exit points if needed.
  • It reduces the perceived risk for clients, as they could theoretically change suppliers between sub-projects. This makes you a less risky option compared to agencies asking for a commitment to the entire project upfront.
  • Each sub-project is easier to price accurately. As you progress, you gain more information, allowing for increasingly precise estimates.
  • It allows for adaptability between sub-projects, eliminating the problem of scope creep. If new requirements emerge during one phase, they can be incorporated into the planning and pricing of the next phase.
  • It encourages ongoing communication and collaboration with the client. Regular check-ins and approvals are built into the process.
  • It aligns with agile methodologies, allowing for iterative development and continuous improvement.

This sub-project approach not only helps with more accurate pricing but also addresses one of the most common challenges in project management: scope creep. By breaking the project into phases, you create natural points for reassessment and adjustment. For a more detailed look at how this approach can help manage scope creep, check out my article “How To Price Projects And Manage Scope Creep.”

This approach sounds great in theory, but you might be wondering how clients typically react to it. Let’s address some common objections and how to handle them.

Dealing with Client Objections

You may encounter resistance to this approach, especially in formal bid processes where clients are used to receiving comprehensive fixed-price quotes. Here’s how to handle common objections:

“We need a fixed price for the entire project.”

Provide an overall estimate based on their initial scope, but emphasize that this is a rough figure. Use your sub-project process as a selling point, explaining how it actually provides more accurate pricing and better results. Highlight how inaccurate other agency quotes are likely to be and warn about potential scope discussions later.

“This seems more complicated than other proposals we've received.”

Acknowledge that it may seem more complex initially, but explain how this approach actually simplifies the process in the long run. Emphasize that it reduces risk and increases the likelihood of a successful outcome.

“We don’t have time for all these phases.”

Explain that while it may seem like more steps, this approach often leads to faster overall delivery because it reduces rework and ensures everyone is aligned at each stage.

“How do we compare your proposal to others if you’re not giving us a fixed price?”

Emphasize that the quality and implementation of what agencies quote for can vary wildly. Your approach ensures they get exactly what they need, not just what they think they want at the outset. Encourage them to consider the long-term value and reduced risk, not just the initial price tag.

“We’re not comfortable discussing our budget upfront.”

Use the real estate analogy to explain why discussing the budget upfront is crucial. Just as a real estate agent needs to know your budget to show you appropriate properties, you need to understand their investment appetite to propose suitable solutions.

By adopting this approach to pricing, you can create a more collaborative relationship with your clients, reduce the risk for both parties, and ultimately deliver better results.

Remember,

Pricing isn’t just about numbers — it’s about setting the foundation for a successful project and a positive client relationship.

By being transparent about your process and focusing on delivering value within the client’s budget, you’ll set yourself apart in a crowded market.

Comparison of Fine-tuning GPT-4o mini vs GPT-3.5 for Text Classification

In my previous articles, I presented a comparison of the OpenAI GPT-4o mini model with the GPT-4o and GPT-3.5 turbo models for zero-shot text classification. The results showed that GPT-4o mini, while significantly cheaper than its counterparts, achieves comparable performance.

On 8 August 2024, OpenAI enabled GPT-4o mini fine-tuning for developers across usage tiers 1-5. You can now fine-tune GPT-4o mini for free until 23 September 2024, with a daily token limit of 2 million.

In this article, I will show you how to fine-tune the GPT-4o mini for text classification tasks and compare it to the fine-tuned GPT-3.5 turbo.

So, let’s begin without further ado.

Importing and Installing Required Libraries

The following script installs the OpenAI Python library, which you can use to make calls to the OpenAI API.


!pip install openai

The script below imports the required libraries into your Python application.


from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from openai import OpenAI
import pandas as pd
import os
import json

Importing the Dataset

We will use the Twitter US Airline Sentiment dataset for fine-tuning the GPT-4o mini and GPT-3.5 turbo models.

The following script imports the dataset and defines the preprocess_data() function. This function takes in a dataset and an index value as inputs. It then divides the dataset by sentiment category, returning 34, 33, and 33 tweets from each category, beginning at the specified index. This approach ensures we have around 100 balanced records. You can use more records for fine-tuning if you want.



dataset = pd.read_csv(r"D:\Datasets\Tweets.csv")

def preprocess_data(dataset, n):

    # Remove rows where 'airline_sentiment' or 'text' are NaN
    dataset = dataset.dropna(subset=['airline_sentiment', 'text'])

    # Remove rows where 'airline_sentiment' or 'text' are empty strings
    dataset = dataset[(dataset['airline_sentiment'].str.strip() != '') & (dataset['text'].str.strip() != '')]

    # Filter the DataFrame for each sentiment
    neutral_df = dataset[dataset['airline_sentiment'] == 'neutral']
    positive_df = dataset[dataset['airline_sentiment'] == 'positive']
    negative_df = dataset[dataset['airline_sentiment'] == 'negative']

    # Select records from Nth index
    neutral_sample = neutral_df[n:n + 34]
    positive_sample = positive_df[n:n + 33]
    negative_sample = negative_df[n:n + 33]

    # Concatenate the samples into one DataFrame
    dataset = pd.concat([neutral_sample, positive_sample, negative_sample])

    # Reset index if needed
    dataset.reset_index(drop=True, inplace=True)

    dataset = dataset[["text", "airline_sentiment"]]

    return dataset

The following script creates a balanced training dataset.


training_data = preprocess_data(dataset, 0)
print(training_data["airline_sentiment"].value_counts())
training_data.head()

Output:

(Image: the sentiment value counts and the first few rows of the balanced training DataFrame.)

Similarly, the script below creates a test dataset.


test_data = preprocess_data(dataset, 100)
print(test_data["airline_sentiment"].value_counts())
test_data.head()

Output:

(Image: the sentiment value counts and the first few rows of the balanced test DataFrame.)

Converting Training Data to JSON Format for OpenAI Model Fine-tuning

To fine-tune an OpenAI model, you need to transform the training data into the JSON Lines (JSONL) format outlined in the official OpenAI documentation, with one JSON record per line. To achieve this, I have written a straightforward function that converts the input Pandas DataFrame into the required JSON structure.

The following script converts the training data into the OpenAI-compliant JSON format for fine-tuning. Fine-tuning relies significantly on the content specified for the system role, so pay special attention when setting this value.


# JSON file path
json_file_path = r"D:\Datasets\airline_sentiments.json"

# Function to create the JSON structure for each row
def create_json_structure(row):
    return {
        "messages": [
            {"role": "system", "content": "You are a Twitter sentiment analysis expert who can predict sentiment expressed in the tweets about an airline. You select sentiment value from positive, negative, or neutral."},
            {"role": "user", "content": row['text']},
            {"role": "assistant", "content": row['airline_sentiment']}
        ]
    }

# Convert DataFrame to JSON structures
json_structures = training_data.apply(create_json_structure, axis=1).tolist()

# Write JSON structures to file, each on a new line
with open(json_file_path, 'w') as f:
    for json_structure in json_structures:
        f.write(json.dumps(json_structure) + '\n')

print(f"Data has been written to {json_file_path}")

Output:


Data has been written to D:\Datasets\airline_sentiments.json

The next step is to upload your JSON file to the OpenAI server. To do so, start by creating an OpenAI client object. Then, call the files.create() method, passing it the opened file and the fine-tune purpose, as demonstrated in the following script:

client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)


training_file = client.files.create(
  file=open(json_file_path, "rb"),
  purpose="fine-tune"
)

print(training_file.id)

Once the file is uploaded, you will receive a file ID, as the above script demonstrates. You will use this file ID to fine-tune your OpenAI model.

Fine-Tuning GPT-4o Mini for Text Classification

To start fine-tuning, you must call the fine_tuning.jobs.create() method and pass it the ID of the uploaded training file and the model name. The current model name for GPT-4o mini is gpt-4o-mini-2024-07-18.


fine_tuning_job_gpt4o_mini = client.fine_tuning.jobs.create(
  training_file=training_file.id,
  model="gpt-4o-mini-2024-07-18"
)

Executing the above script initiates the fine-tuning process. The following script allows you to monitor and display various fine-tuning events.


# List up to 10 events from a fine-tuning job
print(client.fine_tuning.jobs.list_events(fine_tuning_job_id=fine_tuning_job_gpt4o_mini.id,
                                          limit=10))

Once fine-tuning is complete, you will receive an email containing the ID of your fine-tuned model, which you can use to make inferences. Alternatively, you can retrieve the ID of your fine-tuned model by running the following script.


ft_model_id = client.fine_tuning.jobs.retrieve(fine_tuning_job_gpt4o_mini.id).fine_tuned_model
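
Note that the fine_tuned_model field is only populated once the job has finished, so the call above returns None while training is still running. If you don't want to wait for the email, one simple option is to poll the job status until it reaches a terminal state. The following is a minimal sketch of that idea; the one-minute polling interval is an arbitrary choice.

import time

# Poll the fine-tuning job until it reaches a terminal state.
while True:
    job = client.fine_tuning.jobs.retrieve(fine_tuning_job_gpt4o_mini.id)
    print("Current status:", job.status)
    if job.status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(60)  # wait a minute before checking again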

The remainder of the process follows the same steps as outlined in a previous article. We will define the find_sentiment() function and pass it our fine-tuned model and the test set to predict the sentiment of the tweets in the dataset.

Finally, we compute the model's accuracy by comparing the actual and predicted sentiments of the tweets.


def find_sentiment(client, model, dataset):
    # Collect the tweets to classify
    tweets_list = dataset["text"].tolist()

    all_sentiments = []
    i = 0

    while i < len(tweets_list):
        try:
            tweet = tweets_list[i]
            content = """What is the sentiment expressed in the following tweet about an airline?
            Select sentiment value from positive, negative, or neutral. Return only the sentiment value in small letters.
            tweet: {}""".format(tweet)

            response = client.chat.completions.create(
                model=model,
                temperature=0,
                max_tokens=10,
                messages=[
                    {"role": "user", "content": content}
                ]
            )

            sentiment_value = response.choices[0].message.content

            all_sentiments.append(sentiment_value)
            i += 1
            print(i, sentiment_value)

        except Exception as e:
            # On failure (e.g., a rate limit error), report the exception and
            # retry the same tweet on the next loop iteration
            print("===================")
            print("Exception occurred:", e)

    # Compare the predicted sentiments with the ground-truth labels
    accuracy = accuracy_score(dataset["airline_sentiment"], all_sentiments)
    print(f"Accuracy: {accuracy}")

find_sentiment(client, ft_model_id, test_data)

Output:


Accuracy: 0.78

The above output shows that the fine-tuned GPT-4o mini achieves a performance accuracy of 78% on the test set.

Fine-Tuning GPT-3.5 Turbo for Text Classification

For comparison, we will also fine-tune the GPT-3.5 turbo model for text classification.

The fine-tuning process remains the same as for the GPT-4o mini. We will pass the training file ID and the GPT-3.5 turbo model ID to the client.fine_tuning.jobs.create() method, as shown below.


fine_tuning_job_gpt_3_5 = client.fine_tuning.jobs.create(
  training_file=training_file.id,
  model="gpt-3.5-turbo"
)

Next, we will pass the fine-tuned GPT-3.5 model ID and the test dataset to the find_sentiment() function to evaluate the model's performance on the test set.


ft_model_id = client.fine_tuning.jobs.retrieve(fine_tuning_job_gpt_3_5.id).fine_tuned_model
find_sentiment(client, ft_model_id, test_data)

Output:


Accuracy: 0.82

The above output shows that the fine-tuned GPT-3.5 turbo model achieves 82% accuracy, 4 percentage points higher than the fine-tuned GPT-4o mini model.

Conclusion

GPT-4o mini is a cheaper and faster alternative to GPT-3.5. My last article showed that it achieves higher performance for zero-shot text classification than the GPT-3.5 turbo model.

However, based on the results presented in this article, a fine-tuned GPT-3.5 turbo model still outperforms a fine-tuned GPT-4o mini on this task.

Feel free to share your feedback in the comments section.

What Features Would Make a Keylogger Interesting?

Hey guys! I started a keylogger project two months back, and I've hit a slump when it comes to which features I can add to make it more complex and less basic.
So I just wanted to hear any suggestions anyone might have, so I can find some inspiration.
Note: This project is for research purposes only and will not be used in an unethical manner.

The Future of Blog Monetization in 2024: Key Trends and Techniques to Maximize Your Earnings

Blogging has emerged as a dynamic and impactful channel for writers to express their ideas, share their expertise, and foster community. Strategic planning is essential if you want to stand out as a writer and generate revenue, and it is especially crucial for creators and bloggers who aspire to turn their passion into a profitable venture.

Today, there are new trends in blog monetization that you must not overlook. This article is your ultimate guide to the current state of blog monetization in 2024. Not only will it provide you with valuable insights and strategies, but it will also help you unlock the full potential of the platforms you use. So, whether you’re an established content creator or an aspiring blogger looking to diversify your revenue streams, this article is a must-read.

Relevance of Blogging in 2024

Many people wonder if blogging is still relevant today, and the answer is yes! In fact, blogging is more important than ever. It provides a powerful medium for sharing information, connecting with audiences, promoting brands, and establishing authority in your niche. While video and social media marketing have become more popular, blogging remains a fundamental and essential component of the online landscape. There are countless reasons why blogging is still relevant, including:

In-depth content and valuable insights

Blogging lets writers and content creators delve deep into a topic and provide valuable insights. Unlike social media content, which is quite limited in length, blogs offer the flexibility to explain complex concepts, with ample space for building an in-depth understanding of any subject matter.

Search Engine Optimization (SEO)

Blogs play a significant role in driving traffic to websites. They are a crucial part of every website because they help it rank better: search engines favor websites with high-quality, relevant, and fresh content.

This is particularly noticeable with e-commerce websites, which often create separate blog content pages for exactly this reason.

Long-Term Value

Unlike some social media posts, blog posts have a longer shelf life. A well-written blog post can continue generating traffic and revenue long after publication.

Diverse content types

Blogs can incorporate diverse content types, including text, videos, images, infographics, and interactive elements. This versatility enables content creators to cater to different learning styles and broaden their audience reach.

Branding and marketing

A blog is known to be an effective brand marketing tool. It complements branding and provides a platform to communicate the brand’s values and mission. Blogs can also be used to promote products and services.

Community building

Blogs provide a platform for building community relationships. Bloggers can create a safe space for readers to share their thoughts and opinions on the content through comments and discussion sessions.

Blog Content Strategies

Running a successful blog requires a carefully thought-out strategy that aligns with your blog’s objectives. You should also consider the interests of your target audience and ensure that your blog content keeps them engaged. Here are some helpful strategies to consider:

Define your target audience

Before creating your content, you need to identify who it is for. Who is your target audience? What do you aim to achieve? Identify their needs, preferences, and pain points. After this, you can tailor your content to meet their needs and provide value.

Plan your content

One of the main challenges most bloggers face is consistency. Always plan your content: draft a content calendar to help you plan better. Planning will help you maintain consistency and cover a wide variety of topics relevant to your audience. You should also include important dates and holidays in your calendar so you can capitalize on timely content.

Conduct in-depth keyword research

One effective way to optimize your blog for search engines is keyword research. Identify relevant keywords (ensure they align with your content) and incorporate them naturally into your blog posts.

Offer valuable content

This strategy is the most important as no one will want to engage with boring or meaningless content. You should always focus on creating quality, unique, and valuable content. One tip is to provide information and insights that solve your audience’s problems. Everybody is on the lookout for solutions to their problems.

Incorporate visual elements

Include visual elements such as images, videos, and infographics to enhance the user experience. Visual elements help break up long blocks of text and make your content more appealing.

Encourage reader engagement

Always encourage audience interaction by inviting comments and discussions at the end of each post. Respond briefly to comments and questions, as this will foster a sense of community with your audience.

Monitor analytics

Keep a close eye on your analytics. This will give you a clear understanding of which content resonates most with your audience. Analyze your page views, social shares, and bounce rates to refine your content strategy.

Blog Monetization Strategies In 2024

So far, this article has covered the importance of blogging and the strategies for running a successful blog. The next thing to tackle is blog monetization. In plain terms, blog monetization means generating revenue from a blog: you aim to turn your blog into a source of income by utilizing various revenue streams.

The landscape of blog monetization is evolving rapidly. While some strategies have remained the same, new methods have emerged. Some common blog monetization strategies include:

Affiliate and Referral Marketing

Affiliate marketing remains the most effective way for bloggers to monetize their blogs in 2024. But how does it work? Usually, you receive a unique affiliate link or coupon code to share with your audience. When a reader clicks the link and purchases the product or service, you receive a commission based on the reader’s activity on the platform. This strategy is an excellent form of marketing, as you generate conversions and revenue.

This is also an attractive opportunity for bloggers in various niches, as it appeals to a broad spectrum of readers. Another opportunity comes from referral marketing, where users refer the service or product to their contacts using a referral code and the referrer gets a bonus. Some services have referral programs through which you can earn passive income and give yourself yet another revenue stream.

It is recommended to incentivize your audience by crafting comprehensive reviews and tutorials that showcase mutual benefits, earnings potential, and positive impact. Sharing personal success stories about your experience with your affiliates can help build trust and encourage referrals, ultimately leading to tremendous success for all parties involved.

Advertising

Advertising is one of the most commonly used strategies for blog monetization. This process involves displaying ads on your blog and earning revenue based on various metrics, including clicks, impressions, and conversions.

Unlike other monetization strategies, you don’t need to worry about creating or managing products. Your primary focus should be creating valuable content that aligns with the ads.

Display Ads

This type of advertising is a text-based or graphical ad that appears on your blog. Standard display ad networks include Google AdSense, Apple Advertising, and Facebook Audience Network. These networks use contextual targeting to display relevant ads based on your blog content.

Sponsored Content

Sponsored posts are another common type of advertisement. Sponsored ads, articles, and blog content are created through brand collaborations. This content is usually promotional, but you should ensure that it still provides value to your audience. One main advantage of sponsored posts is that you don’t have to deal with order fulfillment and customer support; the brands you collaborate with handle all the backend operations.

Digital Products

Digital products offer a profitable blog monetization strategy that allows bloggers to leverage their skills and creativity to generate revenue. These products are primarily downloadable materials and resources that provide value to your audience. Here are some popular types of digital products:

  • Online courses: You can create in-depth educational courses delivered through audio, video, and written content. Your lessons could also have live elements like webinars.
  • Ebooks: You can also create written guides on topics relevant to your niche. Ensure that the book offers a problem-solving approach to your audience’s issues.
  • Software and apps: Another lucrative digital product is software tools and mobile apps that cater to your audience’s needs. These are highly profitable, as you showcase your expertise while building something genuinely useful for your audience.
  • Templates and worksheets: You can create ready-to-use templates, worksheets, and checklists to help your audience with practical organization and tasks.

Side hustles

Side hustles can complement your blog monetization strategies and provide an additional income stream. A side hustle is a secondary source of income pursued alongside your main hustle, in this case, blogging. By diversifying your income streams, you set yourself up for success and financial stability. Here are some good options for a side hustle:

Freelance Services

While blogging, you may have developed valuable skills such as graphic design, writing, video editing, social media management, and website development. You can monetize these skills by offering these services as a freelancer. You can promote your extra services on your blog or various freelance platforms and connect with clients who need assistance in relevant areas of your expertise.

However, freelancing doesn’t always require specialized skills. There are multiple platforms that will pay you to complete small tasks, like filling in surveys.

Print-on-demand merchandise

You can design print-on-demand merchandise and sell it to your dedicated audience. You can create branded mugs, t-shirts, pens, and stickers without holding inventory. This side hustle is a great way to promote your blog and services while earning extra income.

Coaching and consultation

You can offer coaching or consulting services if you have successfully built authority in your niche. Whether it’s personal development, fitness, finance, or business coaching, offering one-on-one consulting services in any specialized area is a high-end monetization strategy.

Membership/Subscription

The membership/subscription model is a blog monetization strategy where you offer premium and exclusive content, services, or products in exchange for a recurring fee. Your subscribers gain access to premium benefits that regular readers don’t have. This method effectively generates extra revenue while maintaining a sense of community and loyalty among your audience. Here are some tips for successfully utilizing this strategy:

  • Offer exclusive content and value: One core element of a successful membership/subscription model is exclusive content. This content may take the form of long-form articles, premium videos, guides, tutorials, templates, or any other resources that cater to your audience’s needs.
  • Offer tiered plans and pricing: Another essential element is to offer tiered pricing with varying levels of benefits. This allows readers to choose a subscription that suits their budget and needs. Additionally, you can include discounts, early access sign-ups, and special offers for your subscriptions.
  • Offer free trials and samples: By offering free trials and samples, you can attract potential subscribers by showcasing the value they will receive if they subscribe. Free trials also give readers a taste of what to expect and encourage them to switch to paid subscriptions.

Impact Of Emerging Technologies On Blog Monetization

Emerging technologies have a significant impact on blog monetization, presenting both opportunities and challenges. As new technologies evolve, user behaviors and tools change as well, and you must adapt your strategy to stay successful in the ever-changing online landscape. Here are the key areas where emerging technologies affect blog monetization, along with tips for staying successful through the change.

Content Formats

Video content, virtual reality, and streaming are gaining massive popularity. You will need to adapt your content formats to match your audience’s preferences.

Mobile Optimization

With the increasing use of mobile phones, you must ensure your website is optimized for mobile viewing. Consider mobile-friendly designs, responsive layouts, and fast-loading pages to ensure a seamless user experience for your audience.

Social Media Marketing

As social media continues to evolve, it provides new opportunities for you to reach a wider audience. Build a solid social media presence and regularly engage with your audience.

Voice Search and SEO

Over the years, voice search has become more prevalent with the rise of virtual assistants like Alexa and Siri. You will need to optimize your content for voice search queries.

Blockchain

Blockchain technology enables micropayments. You can explore blockchain-based platforms to monetize your content through micropayments, letting your readers pay for access to premium content, services, or products.

By combining strategic planning with valuable insights, you can succeed in the competitive blogging world and turn your passion into a valuable income stream.

The post The Future of Blog Monetization in 2024: Key Trends and Techniques to Maximize Your Earnings appeared first on CSS Author.

If I Was Starting My Career Today: Thoughts After 15 Years Spent In UX Design (Part 2)

In the previous article in my two-part series, I explained how important it is to start by mastering your design tools, to work on your portfolio (even if you have very little work experience — which is to be expected at this stage), and to carefully prepare for your first design interviews.

If all goes according to plan, and with a little bit of luck, you’ll land your first junior UX job — and then, of course, you’ll face more challenges, which I’ll cover in this second article.

In Your New Junior UX Job: On the Way to Grow

You have probably heard of the Pareto Rule, which states that 20% of actions provide 80% of the results.

“The Pareto Principle is a concept that specifies that 80% of consequences come from 20% of the causes, asserting an unequal relationship between inputs and outputs. The principle was named after the economist Vilfredo Pareto.”

— “The Pareto Principle, a.k.a. the Pareto Rule”

This means that some of your actions will help you grow much faster than others.

But before we go into the details, let’s briefly consider the junior UX designer path. I think it’s clear that, at first, juniors usually assist other designers with simple but time-consuming tasks. Then, the level of complexity and your responsibilities start increasing, depending on your performance.

So, you got your first design job? Great! Here are a few things you can focus on if you want to grow at a faster pace.

Chase For Challenges

The simple but slow way to go is to do your work and then wait until your superiors notice how good you are and start giving you more complex tasks. The problem is that other people are usually too focused on their own work to notice yours.

So, to “cut some corners,” you need to actively look for challenges. It’s scary, I know, but remember: the people who invented the groundbreaking UX approaches and frameworks you now see in books and manuals used their intuition first. You have the whole World Wide Web full of articles and lectures about that. So, define the skill you want to develop, spend a day reading about the topic, find a real problem, and practice. Then, share what you did and get some feedback. After a few iterations, I bet you will be assigned your first real task to practice on!

Use Interfaces Consciously

Take the time to look again at the screenshot of the Amazon website (from Part One):

User interfaces didn’t appear in their present form right from the start. Instead, they evolved to their current state over the span of many years. And you all were part of their evolution, albeit passively — you registered on different websites, reset your passwords quite a few times, clicked onboarding screens, filled out short and long web forms, used search, and so on.

In your design work, all tasks (or 99% of them, at least at the beginning) will be based on those UX patterns. You don’t need to reinvent the wheel; you only need to remember what you already know and pay attention to the details while using the interfaces of the apps on your smartphone and on your computer. Ask yourself:

  • Why was this designed this way?
  • What is not clear enough for me as a user?
  • What is thought out well and what is not?

All of today’s great design solutions were built based on common sense and then documented so that other people can learn how to re-use this knowledge. Develop your own “common sense” skill every day by being a careful observer and by living your life consciously. Notice the patterns of good design, try to understand and memorize them, and then implement and rethink them in your own work.

I can also highly recommend the Smart Interface Design Patterns course with Vitaly Friedman. It provides guidelines and best practices for common components in modern interfaces. Inventing a new solution for every problem takes time, and too often, it’s just unnecessary. Instead, we can rely on bulletproof design patterns to avoid issues down the line. This course helps with just that. In the course, you will study hundreds of hand-picked examples, from complex navigation to filters, tables, and forms, and you will work on actual real-life challenges.

Learn How to Present Your Work

The ability to convey complex thoughts and ideas in the form of clear sentences defines how effectively you will be able to interact with other people.

This is a core work skill — a skill that you’ll be actually using your whole life, and not only in your work. I have written about this topic in much detail previously:

“Good communication is about sharing your ideas as clearly as possible.”

— “Effective Communication For Everyday Meetings” (Smashing Magazine)

In that article, I described the general principles that apply to effective communication, the most important being: to develop a skill, you need to practice.

As a quick exercise, try telling your friends about the work you do without boring them with the details. You will know you are on the right track if they do not try to change the topic and instead ask you additional questions!

Gather Feedback

Don’t wait for your yearly review to hear about what you were doing right and wrong. Ask people for feedback and suggestions, and ask them often.

To help them start, first tell them about your weak spots and ask for their impressions. Encourage them to expand on their input and ask for recommendations on how you could fix your weaknesses. Don’t forget to tell them when you are trying to apply their suggestions in practice. After all, these people helped you become better, so be thankful.

Learn Business

I see a lot of designers trying to apply all of their experience to every project, and they often complain that it doesn’t work — customers refuse to follow the entire classical UX process, such as defining User Personas, creating the Information Architecture (IA), outlining the customer journey map, and so on. Sometimes, this happens because clients don’t have the time and budget for it, or they don’t see the value because the designer can’t explain it properly.

But remember that many great products were built without using all of today’s available and tested UX approaches; this doesn’t mean those approaches are useless. Initially, there was only common sense and many attempts to get better results, and only later did someone describe a particular way of working as an approach and specify all the details. So, before trying to apply any of these UX techniques, think about what you need to achieve. Is there any other way to get there within your time and budget?

Learn how the business works. Talk to customers in business language and communicate the value you create and not the specific approach, framework, or tool that you’ll be using.

“Good UX design is where value comes into the picture. We add value when we transform a product or service from delivering a poor experience to providing a good experience.”

— “The Value of Great UX,” by Jared Spool

Learn How to Make Interfaces Nice-Looking

Yes, user experience should come first, but let’s be honest — we also love nice things! The same goes for your customers; they can’t always see the UX part of your work, but they can always tell whether the interface is good-looking. So, learn composition and color theory, use elegant illustrations and icons, learn typography, and always strive to make your work visually appealing. Some would say that it’s not so important, but trust me, it is.

As an exercise, try to copy the design of a few beautiful-looking interfaces. Take a look at an interface screen, then close it and try to recreate it from memory. When you are done, compare the two and make a few more adjustments to get as close to the original as possible. Try to understand why the original was built the way it is. I bet this process of reproducing an interface will help you notice many things you hadn’t noticed before.

Save the People’s Time and Efforts

Prepare for new tasks in advance. Create a list of questions, and don’t forget to ask about the deadlines. Align your plan and the number of iterations so people know precisely what to expect from you and when. Be curious (but not annoying) by asking or sending questions every few hours (but try to search for the answers online first). Even if you don’t find the exact answer, it will help you formulate better questions and get a clearer view of the “big picture.” Remember, one day you will get a task directly from the customer, so gathering the information you need to complete tasks correctly is an excellent skill to develop.

Structure Your Knowledge and Create a Learning Plan

When you are just beginning to learn, too many articles about UX design will look like absolute “must-reads” to you. But you will drown in information if you try to read them all in no particular order. Instead of trying to read everything, first try to find a mentor who will help you build a learning plan and advise you along the way.

Another good way to start is to complete a solid UX online course. If you can’t, take the learning program of any popular UX course out there and research the topics from the course’s list one by one. Also, you can use such a structured list (going from easier to more complex UX topics) for filtering articles you are going to read.

There are many excellent courses out there, and here are a few suggestions:

  • “Selection of free UX design courses, including those offering certifications,” by Cheshta Dua
    In this article, the author shares a few free UX design courses which helped her get started as a UX designer.
  • “Best free UX design courses — 2024,” by Cynthia Vinney (UX Design Institute)
    This is a comparison of a few free UX design courses, both online and in-person.
  • “The 10 Best Free UX Design Courses in 2024,” by Rachel Meltze (CareerFoundry)
    A selection of free UX design courses — using these you can learn the fundamentals of UX design, the tools designers use, and more about the UX design career path.
  • “The HTML/CSS Basics (.dev),” by Geoff Graham
    The Basics is an excellent online course that teaches the basic principles of front-end development. It’s a good “entry point” for those just coming into front-end development or perhaps for someone with experience writing code from years ago who wants to jump into modern-day development.

Practice, Practice, Practice

Bruce Lee once said:

“I fear not the man who has practiced 10,000 kicks once, but the man who has practiced one kick 10,000 times.”

— Bruce Lee

You may have read a lot about some new revolutionary UX approaches, but only practicing allows you to convert this knowledge into a skill. Our brain continually works to clear out unnecessary information from our memory. Therefore, actively practicing the ideas and knowledge that you have learned is the only way to signal to your brain that this knowledge is essential to be retained and re-used.

On a related note, you may also remember the popular “10,000-hour rule,” which was popularized by Malcolm Gladwell’s bestselling book Outliers.

As Gladwell says, the rule goes like this: it takes 10,000 hours of intensive practice to achieve mastery of complex skills and materials, like playing the violin or getting as good as Bill Gates at computer programming. It turns out practice is important, and it’s surprising how much time and effort it may take to master something complicated. But later research also suggests that someone could practice for thousands of hours and still not be a master performer: they could be outperformed by someone who practiced less but had a teacher who showed them just what to focus on at a key moment in their practice.

So, remember my advice from the previous section? Try to find a mentor because, as I said earlier, learning and practicing with a mentor and a good plan will often lead to better results.

Conclusion

Instead of a conclusion (or trying to give you the answer to the ultimate question of life, the universe, and everything), here are only a few final words of advice.

Remember, there is no single correct way to do things because there are no absolute criteria for “things done properly.” You can apply all your knowledge and follow every step of the classical design process, and the product may still fail.

At the same time, someone could quickly develop a minimum viable product (MVP) without using all of the standard design phases — and still conquer the market. Don’t believe me?

The first Apple iPhone, introduced 17 years ago, didn’t even have a basic copy/paste feature, yet we all know how the iPhone conquered the world (and it’s not only the iPhone; there are many other successful MVP examples out there, often conceived by small startups). That’s because Apple engineers and designers got the core product design concept right; they could release a product that didn’t yet have everything in it.

So yes, you need to read a lot about UX and UI design, watch tutorials, learn the design theory, try different approaches, speak to the people using your product (or the first alpha or beta version of it), and practice. But in the end, always ask yourself, “Is this the most efficient way to bring value to people and get the needed results?” If the answer is “No,” update your design plan. Because things are not happening by themselves. Instead, we, humans, make things happen.

You are the pilot of your plane, so don’t expect someone else to care about your success more than you. Do your best. Make corrections and iterate. Learn, learn, learn. And sooner or later, you’ll reach success!

Further Reading

A Selection Of Design Resources (Part One, Part Two)

  • Photoshop CS Down & Dirty Tricks, a book by Scott Kelby
    Bestselling author Scott Kelby shares an amazing collection of Photoshop tricks, including how to create the same exact effects you see every day in magazines, at the movies, on the Web, and more. These are real-world techniques, the same ones you see used by leading Photoshop photographers, designers, and special effect masters.
  • “Why Designers Aren’t Understood,” by Vitaly Friedman (Smashing Magazine)
    How do we conduct UX research when there is no or only limited access to users? Here are some workarounds to run UX research or make a strong case for it. (This article is an upcoming part of the “Smart Interface Design Patterns.” — Editor’s Note)
  • “UXchallenge,” by Yachin You
    This website will help you learn how to solve real problems that customers face and present case studies that are related to these problems.
  • “Kano analysis: The Kano model explained” (Qualtrics)
    Kano analysis (also known as the “Customer Delight vs. Implementation Investment” approach) is a tool that helps you enhance your products and services based on customer emotions. This guide will help you understand what Kano analysis is and how you can use it in practice.
  • “Kano Model: What It Is & How to Use It to Increase Customer Satisfaction” (Userpilot)
    The Kano model uses quick and powerful data analysis to design your product roadmap. In this article, you will learn a brief history of the Kano model, a practical explanation of how it works, five categories of potential customer reactions to new features, and a four-step process for effective Kano analysis.
  • “The Pareto Principle” (Investopedia)
    The Pareto Principle is a concept that specifies that 80% of consequences come from 20% of the causes, asserting an unequal relationship between inputs and outputs. Named after the economist Vilfredo Pareto, this principle serves as a general reminder that the relationship between inputs and outputs is not balanced. The Pareto Principle is also known as the Pareto Rule or the 80/20 Rule.
  • “Figma Portfolio Templates & Examples” (UX Crush)
    A curated selection of portfolio templates for Figma Design.
  • “How to Define a User Persona,” by Raven Veal (CareerFoundry)
    As you break into a career in UX, user personas are one tool you’ll certainly want to have available as you gather user research and find design solutions to solve problems and create more human-friendly products and experiences.
  • “How to design a customer journey map,” by Emily Stevens (UX Design Institute)
    A customer journey map is a visual representation of how a user interacts with your product. This detailed guide will teach you how to create such a customer journey map.
  • “Building Components For Consumption, Not Complexity” (Part 1, Part 2), by Luis Ouriach (Smashing Magazine)
    In this two-part series of articles, Luis shares his experience with design systems and how you can overcome the potential pitfalls, starting from how to make designers on your team adopt the complex and well-built system that you created to what are the best naming conventions and how to handle the auto-layout of components, indexing/search, and more.
  • “Effective Communication For Everyday Meetings,” by Andrii Zhdan (Smashing Magazine)
    Before any meeting starts, we often have many ideas about what to say and how it should go. But when the meeting happens, reality may “crash” all of our plans. This article is about conducting productive meetings. The author will give you a step-by-step guide on preparing a solid meeting structure that will let you follow the original plan and reach the meeting goals.
  • “The Value of Great UX,” by Jared Spool
    This crossover from poor UX design to good UX design is where value comes into the picture. We add value when we transform a product or service from delivering a poor experience to providing a good experience.
  • “How Designers Should Ask For (And Receive) High-Quality Feedback,” by Andy Budd (Smashing Magazine)
    Designers often complain about the quality of feedback they get from senior stakeholders. In this article, Andy Budd shares a better way of requesting feedback: rather than sharing a linear case study that explains every design revision, the first thing to do would be to better frame the problem.
  • “Designing A Better Design Handoff File In Figma,” by Ben Shih (Smashing Magazine)
    Practical tips to enhance the handoff process between design and development in product development, with provided guidelines for effective communication, documentation, design details, version control, and plugin usage.
  • “The HTML/CSS Basics (.dev),” by Geoff Graham
    The Basics is an online course that teaches the basic principles of front-end development, focusing specifically on HTML and CSS. A good “entry point” for those just coming into front-end development and perhaps for someone with experience writing code years ago who wants to jump into modern-day development.
  • “Selection of free UX design courses, including those offering certifications,” by Cheshta Dua
    In this article, the author shares a few free UX design courses that helped her get started as a UX designer.
  • “Best free UX design courses — 2024,” by Cynthia Vinney (UX Design Institute)
    Check this comparison of several free UX design courses currently on the market, both online and in-person.
  • “The 10 Best Free UX Design Courses in 2024,” by Rachel Meltze (CareerFoundry)
    A selection of free UX design courses where you can learn the fundamentals of UX design, the tools designers use, and the UX design career path. This guide provides a range of courses, from micro-tutorials to full-featured UI/UX courses.
  • “Researcher Behind ‘10,000-Hour Rule’ Says Good Teaching Matters, Not Just Practice,” by Jeffrey Young (EdSurge Magazine)
    It takes 10,000 hours of intensive practice to achieve mastery of complex skills and materials, like playing the violin or getting as good as Bill Gates at computer programming. Turns out, a study also shows that there’s another important variable that Gladwell originally didn’t focus on: how good a student’s teacher is.
  • “An Apple engineer details why the first iPhone didn’t have copy and paste,” by Filipe Espósito (9to5Mac)
    Apple introduced the first iPhone 17 years ago, and a lot has changed since then, but it’s hard to believe that long ago, the iPhone didn’t even have copy-and-paste options. Now, former Apple software engineer Ken Kocienda has revealed details about why the first iPhone didn’t have such features.
  • “Fifteen examples of successful MVPs,” by Ross Krawczyk (RST Software)
    Startups need to get their products to the market faster than ever in an increasingly competitive world. The minimum viable product is the way to achieve this, but you must be able to provide the right key features that give value to a wide customer base in order to attract clients and investors on time.