Breaking Down Data Silos With a Unified Data Warehouse: An Apache Doris-Based CDP

The data silos problem is like arthritis for online businesses: almost everyone gets it as they grow older. Businesses interact with customers via websites, mobile apps, H5 pages, and end devices. For one reason or another, it is tricky to integrate the data from all these sources. Data stays where it is and cannot be interrelated for further analysis. That's how data silos form. The bigger your business grows, the more diversified your customer data sources become, and the more likely you are to be trapped by data silos.

This is exactly what happened to the insurance company I'm going to talk about in this post. By 2023, they had already served over 500 million customers and signed 57 billion insurance contracts. When they started to build a customer data platform (CDP) to accommodate data at that scale, they used multiple components.

Virtual Network Functions in VPC and Integration With Event Notifications in IBM Cloud

What Are Virtual Network Functions (VNFs)?

Previously, proprietary hardware performed network functions such as routing, firewalling, and load balancing. In IBM Cloud, we have proprietary hardware like the FortiGate firewall that resides inside IBM Cloud data centers today. In a VNF, these hardware functions are packaged as virtual machine images.

VNFs are virtualized network services packaged as virtual machines (VMs) that run on commodity hardware. They allow service providers to run their networks on standard servers instead of proprietary ones. Common VNFs include virtualized routers, firewalls, load balancers, WAN optimization, security, and other edge services. With a cloud service provider like IBM, a user can spin up these VNF images on standard virtual servers instead of proprietary hardware.

Continuous Improvement as a Team

Cultivating a culture of continuous improvement within Scrum teams or Agile teams is pivotal for personal well-being, enhancing effectiveness, building trust with stakeholders, and delivering products that genuinely enhance customers’ lives.

This post dives into the top ten actionable strategies derived from the Scrum Anti-Patterns Guide book, providing a roadmap for teams eager to embrace Kaizen practices. From embracing Scrum values and fostering psychological safety to prioritizing customer feedback and continuous learning, these strategies offer a comprehensive approach to fostering innovation, collaboration, and sustained improvement.

Cilium: The De Facto Kubernetes Networking Layer and Its Exciting Future

Cilium is an eBPF-based project that was originally created by Isovalent, open-sourced in 2015, and has become the center of gravity for cloud-native networking and security. With 700 active contributors and more than 18,000 GitHub stars, Cilium is the second most active project in the CNCF (behind only Kubernetes), and in Q4 2023 it became the CNCF's first project to graduate in the cloud-native networking category. A week ahead of the KubeCon EU event, where Cilium and the recent 1.15 release are expected to be among the most popular topics with attendees, I caught up with Nico Vibert, Senior Staff Technical Engineer at Isovalent, to learn more about why this is just the beginning for the Cilium project.

Q:  Cilium recently became the first CNCF graduating “cloud native networking” project — why do you think Cilium was the right project at the right time in terms of the next-generation networking requirements of cloud-native?

Simplify, Process, and Analyze: The DevOps Guide To Using jq With Kubernetes

In the ever-evolving world of software development, efficiency and clarity in managing complex systems have become paramount. Kubernetes, the de facto orchestrator for containerized applications, brings its own set of challenges, especially when dealing with the vast amounts of JSON-formatted data it generates. Here, jq, a lightweight and powerful command-line JSON processor, emerges as a vital tool in a DevOps professional's arsenal. This comprehensive guide explores how to leverage jq to simplify, process, and analyze Kubernetes data, enhancing both productivity and insight.

Understanding jq and Kubernetes

Before diving into the integration of jq with Kubernetes, it's essential to grasp the basics. jq is a tool designed to transform, filter, map, and manipulate JSON data with ease. Kubernetes, on the other hand, manages containerized applications across a cluster of machines, producing and utilizing JSON outputs extensively through its API and command-line tools like kubectl.
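As a quick taste of the combination (a minimal sketch, assuming kubectl is configured against a cluster and jq is installed), you can pipe kubectl's JSON output straight into jq filters:

# List the name of every pod in the current namespace
kubectl get pods -o json | jq -r '.items[].metadata.name'

# Pair each pod with the images its containers run
kubectl get pods -o json | jq -r '.items[] | "\(.metadata.name): \(.spec.containers[].image)"'

The -r flag prints raw strings instead of JSON-quoted values, which is usually what you want when feeding the output to other shell tools.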

Rethinking DevOps in 2024: Adapting to a New Era of Technology

As we advance into 2024, the landscape of DevOps is undergoing a transformative shift. Emerging technologies, evolving methodologies, and changing business needs are redefining what it means to implement DevOps practices effectively. This article explores the key trends and adaptations shaping DevOps as we navigate this technological transition.

Emerging Trends in DevOps

AI and ML Integration

The integration of artificial intelligence (AI) and machine learning (ML) within DevOps processes is no longer a novelty but a necessity. AI-driven analytics and ML algorithms are revolutionizing how we approach automation, problem-solving, and predictive analysis in DevOps.

LangChain, Python, and Heroku

Since the launch and wide adoption of ChatGPT near the end of 2022, we’ve seen a storm of news about tools, products, and innovations stemming from large language models (LLMs) and generative AI (GenAI). While many tech fads come and go within a few years, it’s clear that LLMs and GenAI are here to stay.

Do you ever wonder about all the tooling going on in the background behind many of these new tools and products? In addition, you might even ask yourself how these tools—leveraged by both developers and end users—are run in production. When you peel back the layers of many of these tools and applications, you're likely to come across LangChain, Python, and Heroku.

Sketchnotes And Key Takeaways From SmashingConf Antwerp 2023

I have been reading and following Smashing Magazine for years — I’ve read many of the articles and even some of the books published. I’ve also been able to attend several Smashing workshops, and perhaps one of the peak experiences of my isolation times was the online SmashingConf in August 2020. Every detail of that event was so well-designed that I felt genuinely welcomed. The mood was exceptional, and even though it was a remote event, I experienced similar vibes to an in-person conference. I felt the energy of belonging to a tribe of other great design professionals.

I was really excited to find out that the talks at SmashingConf Antwerp 2023 were going to be focused on design and UX! This time, I attended remotely again, just like back in 2020: I could watch and live-sketch note seven talks (and I’m already looking forward to watching the remaining talks I couldn’t attend live).

Even though I participated remotely, I got really inspired. I had a lot of fun, and I felt truly involved. There was an online platform where the talks were live-streamed, as well as a dedicated Slack channel for the conference attendees. Additionally, I shared my key takeaways and sketchnotes right after each talk on social media. That way, I could have little discussions around the topics, even though I wasn’t there in person.

In this article, I would like to offer a brief summary of each talk, highlighting my takeaways (and my screenshots). Then, I will share my sketchnotes of those seven talks (+ two more I watched after the conference).

Day 1 Talks

Introduction

At the very beginning of the conference, Vitaly said hello to everyone watching online, so even though I participated remotely, I felt welcomed. :-) He also shared that there is an overarching mystery theme of the conference, and the first one who could guess it would get a free ticket for the next Smashing conference — I really liked this gamified approach.

Vitaly also reminded us that we should share our success stories as well as our failure stories (how we’ve grown, learned, and improved over time).

We were introduced to the Pac-Man rule: if we are having a conversation and someone comes up and wants to join, open up the circle for them — just like Pac-Man does (well, Pac-Man opens his mouth because he wants to eat; you want to encourage conversations).

In between talks, Vitaly told us a lot of design jokes; for instance, this one related to design systems was a great fit for the first talk:

Where did Gray 500 and Button Primary go on their first date?

To a naming convention.

After this little warm-up, Molly Hellmuth delivered the first talk of the event. Molly has been a great inspiration for me not only as a design system consultant but also as a content creator and community builder. I’m also enthusiastic about learning the more advanced aspects of Figma, so I was really glad that Molly chose this topic for her talk.

“Design System Traps And Pitfalls” by Molly Hellmuth

Molly is a design system expert specializing in Figma design systems, and she teaches a course called Design System Bootcamp. Every time she runs this course, she sees students make similar mistakes. In this talk, she shared the most common mistakes and how to avoid them.

Molly shared the most common mistakes she experienced during her courses:

  • Adopting new features too quickly,
  • Adding too many color variables,
  • Using groups instead of frames,
  • Creating jumbo component sets,
  • Not prepping icons for our design system.

She also shared some rapid design tips:

  • Set the nudge amount to 8,
  • Hide components in a library by adding a period or an underscore to their names,
  • Go to a specific layer by double-clicking on the layer icon,
  • Scope variables, e.g., make colors meant for text available only for text,
  • Use auto layout stacking order (it is not only for avatars; e.g., it is great for dropdown menus, too).

“How AI Ate My Website” by Luke Wroblewski

I have been following Luke Wroblewski since the early days of my design career. I read his book “Web Form Design: Filling in the Blanks” back in 2011, so I was really excited to attend his talk. Also, the topic of AI and design has been a hot one lately, so I was very curious about the conversational interface he created.

Luke has been creating content for 27 years; for example, there are 2,012 articles on his website, along with videos, books, and PDFs. He created an experience that lets us ask questions of an AI that has been fed all of this data (his articles, videos, books, and so on).

In his talk, he explained how he created the interaction pattern for this conversational interface. It is more like a FAQ pattern and not a chatbot pattern. Here are some details:

  • He also tackled the “what should I ask” problem by providing suggested questions below the most recent answer; that way, he can provide a smoother, uninterrupted user flow.

  • He linked all the relevant sources so that users can dig deeper (he calls it the “object experience”). Users can click on a citation link, and then they are taken to, e.g., a specific point of a video.

He also showed us how AI eats all this stuff (e.g., processing, data cleaning) and talked about how it assembles the answers (e.g., how to pick the best answers).

So, to compare Luke’s experience to, e.g., ChatGPT, here are some points:

  • It is more opinionated and specific (ChatGPT gives a “general world knowledge” answer);
  • We can dig deeper by using the relevant resources.

You can try it out on the ask.lukew.com website.

“A Journey in Enterprise UX” by Stéphanie Walter

Stéphanie Walter is also a huge inspiration and a designer friend of mine. I really appreciate her long-form articles, guides, and newsletters. Additionally, I have been working in banking and fintech for the last couple of years, so working for an enterprise (in my case, a bank) is a situation I’m familiar with, and I couldn’t wait to hear about a fellow designer’s perspective and insights about the challenges in enterprise UX.

Stéphanie’s talk resonated with me on so many levels, and below is a short summary of her insightful presentation.

On complexity, she discussed the following points:

  1. Looking at quantitative data: What? How much?
    Doing some content analysis (e.g., any duplicates?)
  2. After the “what” and discovering the “as-is”: Why? How?
    • By getting access to internal users;
    • Conducting task-focused user interviews;
    • Documenting everything throughout the process;
    • “Show me how you do this today” to tackle the “jumping into solutions” mindset.

Stéphanie shared with us that there are two types of processes:

  • Fast track
    Small features, tweaks on the UI — in these cases, there is no time or need for intensive research; it involves mostly UI design.
  • Specific research for high-impact parts
    When there is a lot of doubt (“we need more data”). It involves gathering the results of the previous research activities; scheduling follow-up sessions; iterating on design solutions and usability testing with prototypes (usually using Axure).
    • Observational testing
      “Please do the things you did with the old tool but with the new tool” (instead of using detailed usability test scripts).
    • User diary + longer studies to help understand the behavior over a period of time.

She also shared what she wishes she had known sooner about designing for enterprise experiences: e.g., it can be a trap to oversimplify the UI, and customization and providing all the needed pieces of data are important.

It was also very refreshing that she corrected the age-old saying about user interfaces: you know, the one that starts with, “The user interface is like a joke...”. The thing is, sometimes, we need some prior knowledge to understand a joke. This fact doesn’t make a joke bad. It is the same with user interfaces. Sometimes, we just need some prior knowledge to understand it.

Finally, she talked about some of the main challenges in such environments, like change management, design politics, and complexity.

Her design process in enterprise UX looks like this:

  • Complexity
    How am I supposed to design that?
  • Analysis
    Making sense of this complexity.
  • Research
    Finding and understanding the puzzle pieces.
  • Solution design
    Eventually, everything clicks into place.

The next talk was about creating a product with a point of view, meaning a product whose tone of voice is “unique,” “unexpected,” or “interesting.”

“Designing A Product With A Point Of View” by Nick DiLallo

Unlike the other eight speakers whose talks I sketched, I wasn’t familiar with Nick’s work before the conference. However, I’m really passionate about UX writing (and content design), so I was excited to hear Nick’s points. After his talk, I have become a fan of his work; check out his great articles on Medium.

In his talk, Nick DiLallo shared many examples of good and not-so-good UX copy.

His first tip was to start by defining our target audience, since the first step towards writing anything is not writing; rather, it is figuring out who is going to read it. If we define who will be reading as a starting point, we will be able to make good design decisions for our product.

For instance, instead of designing for “anyone who cooks a lot”, it is a lot better to design for “expert home chefs”. We don’t need to tell them to “salt the water when they are making pasta”.

After defining our audience, the next step is saying something interesting. Nick’s recommendation is that we should start with one good sentence that can unlock the UI and the features, too.

The next step is about choosing good words; for example, instead of “join” or “subscribe,” we can say “become a member.” However, sometimes we shouldn’t get too creative, e.g., we should never say “add to submarine” instead of “add to cart” or “add to basket”.

We should design our writing. What we include signals what we care about, and the bigger something is visually, the more it will stand out (it is about establishing a meaningful visual hierarchy).

We should also find moments to add voice; e.g., the footer can contain more than legal text. On the other hand, there are moments and places that are not for adding more words; for instance, a calendar or a calculator shouldn’t contain brand voice.

Nick also highlighted that the entire interface speaks about who we are and what our worldview is. For example, what options do we include when we ask the user’s gender?

He also added that what we do is more important than what we write. For example, we can say that it is a free trial, but if the next thing the UI asks for is our bank card details, it’s like someone saying they are vegetarian and then eating a cheeseburger in front of us.

Nick closed his talk by saying that companies should hire writers or content designers since words are part of the user experience.

“When writing and design work together, the results are remarkable.”

“The Invisible Power of UI Typography” by Oliver Schöndorfer

This year, Oliver has quickly become one of my favorite design content creators. I attended some of his webinars, I’m a subscriber of his Font Friday newsletter, and I really enjoy his “edutainment style”. He is like a stand-up comedian. His talks and online events are full of great jokes and fun, but at the same time, Oliver always manages to share his extensive knowledge about typography and UI design. So I knew that the following talk was going to be great. :)

During his talk, Oliver redesigned a banking app screen live, gradually adding the enhancements he talked about. His talk started with this statement:

“The UI is the product, and a big part of it is the text.”

After that, he asked an important question:

“How can we make the type work for us?”

Some considerations we should keep in mind:

  • Font Choice
    System fonts are boring. We should think about what the voice of our product is! So, pick fonts that:
    • are in the right category (mostly sans, sometimes slabs),
    • have even strokes with a little contrast (it must work in small sizes),
    • have open-letter shapes,
    • have letterforms that are easy to distinguish (the “Il1” test).

  • Hierarchy
    i.e. “What is the most important thing in this view?”

Start with the body text, then emphasize and deemphasize everything else — and watch out for the accessibility aspects (e.g. minimum contrast ratios).

  • Spacing
    Relations should be clear (law of proximity), and we should be able to define a base unit.

Then we can add some final polish (and if it is appropriate, some delight).

As Oliver said, “Go out there and pimp that type!”

Day 2 Talks

“Design Beyond Breakpoints” by Christine Vallaure

I’m passionate about the designer-developer collaboration topic (I have a course and some articles about it), so I was very excited to hear Christine’s talk! Additionally, I really appreciate all the Figma content she shares, so I was sure that I’d learn some new exciting things about our favorite UI design software.

Christine’s talk was about pushing the current limits of Figma: how to do responsive design in Figma, e.g., by using so-called container queries. These queries are like media queries, but instead of looking at the viewport size, we look at the container. So a component behaves differently if, e.g., it is inside a sidebar, and we can also nest container queries (e.g., tell an icon button inside a card that upon resizing, the icon should disappear).

Recommended Reading: A Primer On CSS Container Queries by Stephanie Eckles

She also told a German fairy tale about a race between a hedgehog and a rabbit. The hedgehog wins the race even though he is slower: since he is smarter, he sends his wife (who looks exactly like him) to the finish line in advance. Christine told us that she had mixed feelings about this story because she didn’t like the idea of pretending to be fast when someone has other great skills. In her analogy, the rabbits are the developers, and the hedgehogs are the designers. Her lesson was that we should embrace each other’s tools and skills instead of trying to mimic each other’s work.

The lesson of the talk was not really about pushing the limits. Rather, the talk reminded us why we are doing all this:

  • To communicate our design decisions better to the developers,
  • To try out how our design behaves in different cases (e.g., where it should break and how), and
  • For documentation purposes; she recommended the EightShapes Specs plugin by Nathan Curtis.

Her advice is:

  • We should create a playground inside Figma and try out how our components and designs work (and let developers try out our demo, too);
  • Have many discussions with developers, and don’t start these discussions from zero; e.g., read a bit about frontend development and build fundamental knowledge of development aspects.

“It’s A Marathon, And A Sprint” by Fabricio Teixeira

If you are a design professional, you have surely encountered at least a couple of articles published by the UX Collective, a very impactful design publication. Fabricio is one of the founders of that awesome corner of the Internet, so I knew that his talk would be full of insights and little details. He shared four case studies and included a lot of great advice.

During his talk, Fabricio used the analogy of running. When we prepare for a long-distance running competition, 80% of the time should go to easy runs, and 20% should be devoted to intensive, short interval runs, because that mix gets the best results. He also highlighted that, just like during a marathon, things will get hard during our product design projects, but we must remember how much we trained. When someone from the audience asked how not to get overly confident, he said that we should build an environment of trust so that other people on our team can make us realize if we’ve become too confident.

He then mentioned four case studies; all of these projects required a different, unique approach and design process:

  • Product requirements are not required.
    Vistaprint and designing face masks — the world needed them designed really fast; it was a 15-day sprint, and they did not have time to design all the color and sizing selectors (only after the launch did it turn into a marathon).

  • Timelines aren’t straight lines.
    The case study of Equinox treadmill UI: they created a fake treadmill to prototype the experience; they didn’t wait for the hardware to get completed (the hardware got delayed due to manufacturing issues), so there was no delay in the project even in the face of uncertainty and ambiguity. For example, they took into account the hand reach zones, increased the spacing between UI elements so that these remained usable even while the user was running, and so on.

Exciting challenge: the average treadmill interface is a complicated dashboard where everything fights for our attention.

  • Research is a mindset, not a step.
    He mentioned the GoFundMe project, where they applied a fluid approach to research, meaning that design and research ran in parallel: the design informed the research and vice versa. Also, insights can come from anyone on the team, not just from researchers. I really liked that they started a book club where everyone read a book about social impact, and they created a Figma file that served as a knowledge hub.

  • Be ready for some math
    During the New York City Transit project, they created a real-time map of the subway system, which required them to create a lot of vectors and do some math. One of the main design challenges was, “How to clean up complexity?”

Fabricio shared that we should be “flexibly rigorous”: just as runners listen to their bodies, we should listen to the specific context of a given project. There is no magic formula out there. Rigor and discipline are important, but we must also stay attuned so that we don’t lose touch with reality.

The key takeaway: because we as a design community focus a lot on processes, and there is of course no one right way to do design, we should combine sprints and marathons, adjust our approach to the needs of the given project, and most of all, focus more on principles, e.g., how do we, as a team, want to work together?

One last note: in the post-talk discussion with Vitaly Friedman, Fabricio mentioned that a 1–3-hour kick-off meeting is too short for something a team will work on for, e.g., 6 months, so his team introduced kick-off weeks.

Kat delivered one of the most important talks (or maybe the most important talk) of the conference. The ethics of design is a topic that has been around for many years now. Delivering a talk like this is challenging because it requires a perspective that easily gets lost in our everyday design work. I was really curious about how Kat would make us think and have us question our way of working.

“Design Ethically: From Imperative To Action” by Kat Zhou

Kat’s talk walked us through our current reality: how algorithms have built-in biases, manipulate users, hide content that shouldn’t be hidden, and allow things that shouldn’t be allowed. The main question, however, is:

Why is that happening? Why do designers create such experiences?

Kat’s answer is that companies must ruthlessly design for growth. And we, as designers, have the power to exercise control over others.

She showed us some examples of what she considers oppressive design, like the Panopticon by Jeremy Bentham. She also provided an example of hostile architecture (whose goal is to prevent humans from resting in public places). There are also dark patterns within digital experiences, such as the New York Times subscription cancellation flow (users had to make a phone call to cancel).

And the end goal of oppressive design is always to get more of users’ data, time, and money. What amplifies this effect is that, from an employee’s (designer’s) perspective, performance is tied to achieving OKRs.

Our challenge is how we might redesign the design process so that it doesn’t perpetuate the existing systems of power. Kat’s suggestion is that we should add some new parts to the design process:

  • There are two phases:
    Intent: “Is this problem a worthy problem to solve?”
    Results: “What consequences do our solutions have? Who is it helping? Who is it harming?”
  • Add “Evaluate”:
    “Is the problem statement we defined even ethically worthy of being addressed?”
  • Add “Forecast”:
    “Can any ethical violations occur if we implement this idea?”
  • Add “Monitor”:
    “Are there any new ethical issues occurring? How can we design around them?”

Kat shared a toolkit and framework that help us understand the consequences of the things we are building.

Kat talked about forecasting in more detail. As she said,

“Forecasted consequences often are design problems.”

Our responsibility is to design around those forecasted consequences. We can pull a product apart by thinking about the layers of effect:

  • The primary layer of effect is intended and known, e.g., Google Search is intended and known as a search engine.
  • The secondary effect is also known and intended by the team, e.g., Google Search as an ad revenue generator.
  • The tertiary effect is typically unintended, though possibly known, e.g., in Algorithms of Oppression, Safiya Umoja Noble talks about the biases built into Google Search.

So designers should define and design ethical primary and secondary effects, forecast tertiary effects, and ensure that they don’t pose any significant harm.

I first encountered atomic design in 2015, and I remember being fascinated by the clear logical structure behind this mental model. Brad is one of my design heroes because I really admire all the work he has done for the design community. I knew that behind the “clickbait title” (Brad said it himself), there’d be some great points. And I was right: he mentioned some ideas I have been thinking about ever since his talk.

“Is Atomic Design Dead?” by Brad Frost

In the first part of the talk, Brad gave us a little WWW history starting from the first website all the way to web components. Then he summarized that design systems inform and influence products and vice versa.

I really liked that he listed three problematic cases:

  • When the design system team is very separated, sitting in their ivory tower.
  • When the design system police put everyone in the design system jail for detaching an instance.
  • When the product roadmaps eat the design system efforts.

He then summarized the foundations of atomic design (atoms, molecules, organisms, templates and pages) and gave a nice example using Instagram.

He answered the question asked in the title of the talk: atomic design is not dead, since it is still a useful mental model for thinking about user interfaces, and it helps teams find a balance and equilibrium between design systems and products.

And then here came the most interesting and thought-provoking part: where do we go from here?

  1. What if we don’t waste any more human potential on designing yet another date picker but instead create a global design system together, collaboratively? It would be a set of unstyled components that we can style for ourselves.

  2. The other topic he brought up is the use of AI, and he mentioned Luke Wroblewski’s talk, too. He also talked about the project he is working on with Kevin Coyle: converting a codebase (and its documentation) into a format that GPT-4 can understand. Brad showed us a demo of creating an alert component using ChatGPT (and this limited corpus).

His main point was that since the “genie” is out of the bottle, it is on us to use AI more responsibly. Brad closed his talk by highlighting the importance of using human potential and time for better causes than designing one more date picker.

Mystery Theme/Other Highlights

When Vitaly first got on stage, one of the things he asked the audience to keep an eye out for was an overarching mystery theme that connects all the talks. At the end of the conference, he finally revealed the answer: the theme was connected to the city of Antwerp!

Where does the name "Antwerp" come from? From “hand werpen,” or “to throw a hand.” Once upon a time, there was a giant who collected money from everyone crossing the river. One day, a soldier came, cut off the giant’s hand, and threw it to the other side, liberating the city. So, the story and the theme were “legends.” For instance, Molly Hellmuth included Bigfoot (Sasquatch), Stéphanie mentioned Prometheus, Nick added the word "myth" to one of his slides, Oliver applied a typeface usually used in fairy tales, Christine mentioned Sisyphus, and Kat talked about Pandora’s box.

My Very Own Avatar

One more awesome thing happened thanks to attending this conference: I got a great surprise from the Smashing team! I won the hidden “Best Sketchnotes” challenge and was gifted a personalized avatar created by Smashing Magazine’s illustrator, Ricardo.

Full Agenda

There were other great talks — I’ll be sure to watch the recordings! For anyone asking, here is the full agenda of the conference.

A huge thanks again to all of the organizers! You can check out all the current and upcoming Smashing conferences planned on the SmashingConf website anytime.

Saving The Best For Last: Photos And Recordings

The one-and-only Marc Thiele captured in-person vibes at the event — you can see the stunning, historic Bourla venue it took place in and how memorable it all must have been for the attendees! 🧡

For those who couldn’t make it in person and are curious to watch the talks, well, I have good news for you! The recordings have been recently published — you can watch them over here:


Thank you for reading! I hope you enjoyed reading this as much as I did writing it! See you at the next design & UX SmashingConf in Antwerp, maybe?

Chris’ Corner: Cool Ideas

Lossy compression can be good. A JPG, for instance, naturally uses compression that can be adjusted, and that compression leads to lost image data. In a good way. The image may end up being much smaller, which is good for performance. Sometimes the lost image data just isn’t a big deal; you barely notice it.

You don’t often think of lossy compression outside of media assets, though. You can certainly compress a text asset like CSS, but you wouldn’t want that compression to be lossy. It would probably break the syntax! The code wouldn’t work correctly! It’s total madness!

Unless you’re just having a bit of fun — like Daniel Janus was doing there.

  • Original: page, CSS, source SASS
  • 1 style rule: page, CSS (93% information loss)
  • 5 style rules: page, CSS (74% information loss)
  • 10 style rules: page, CSS (55% information loss)
  • 20 style rules: page, CSS (31% information loss)
  • 30 style rules: page, CSS (17% information loss)

When I think of JavaScript-based syntax highlighters, my go-to is Prism.js. I think it’s mostly used client-side, but I don’t see why you couldn’t run it server-side (and in fact, you probably should).

But I digress already, I’m trying to link to Shiki, a syntax highlighter tool I hadn’t seen before. The results look really nice to me:

It’s based on TextMate grammars, using the same exact system as VS Code. And it can be run in any JS runtime. Pretty compelling to me!

I was able to Hello, World! it on CodePen very easily.


“Pull to refresh” is a wonderfully satisfying UI/UX interaction famously first invented by Tweetie. It’s the kind of interaction where you might assume you need some deeper level of control over view primitives to pull off, and that we essentially lack this control on the web. You’d be wrong though, as Adam Argyle has a really interesting solution for it.

The trick Adam uses is that the page starts scrolled down a smidge by virtue of the <main> region having a hard scroll snapping point. With that in place, you could scroll upwards to see another element (which says “Pull to refresh”), but the page would immediately scroll snap back down if you let go. Turns out this is pretty tricky, involving a very deliberately sized snapping point and even an animation to delay the adding of the snap point. Then Scroll Driven Animations are used to animate things as you scroll into that area, and a smidge of JavaScript, which you’d use anyway to “refresh” things, is used to fake the refreshing and put the page back in its original position.

See the demo, it’s very cool.


If you were going to pick some colors, you could do worse than just stealing them from Dieter Rams designs.

Speaking of ol’ Dieter, this collection of Framer components (which thus work on the web through just HTML/CSS and sometimes SVG) is very impressive. Just really elegant buttons and controls that have that “beg to be touched” look.


Have you heard people saying “LoFi” lately? Kinda sounds like a subgenre of electronic music. But no, it stands for “Local First Web Development.” And even that twists my brain a little bit, because I’m like, “yes, obviously we all work on local development environments now; the edit-on-the-server days are all but gone.” But they don’t mean local development; they mean that the app’s architecture should support storing data locally, possibly to be synced later. This has many advantages, as Nils Riedemann writes:

“local first” is an approach to developing web applications in such a way, that they could be used offline and the client retains a copy of their data. That also effectively eliminates many loading spinners and latency issues. Kind of like optimistic updates, but on steroids with PWA, Wasm and a bunch of other new abbreviations and acronyms.

The premise is that it’s much easier to consider this behavior as the default and then expand, rather than the other way around.

There are some major apps like Figma that do this, and it’s fairly easy to point to them as examples and be like “see, good.” I don’t disagree, really. Especially the fact that it can help support the PWA offline approach; that just feels right. They have a homepage for the movement, and there are some technologies that help support it. For instance, the concept of CRDTs, which help sync data in a way where you can merge changes rather than have one side or the other “win,” is pretty transformative.
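To make the merge idea tangible, here is a toy sketch in Python (not any particular CRDT library) of a grow-only counter, one of the simplest CRDTs: each replica increments its own slot, and merging takes the per-replica maximum, so neither side’s increments are ever lost.

def increment(counter, replica):
    # Each replica only ever bumps its own slot.
    counter = dict(counter)
    counter[replica] = counter.get(replica, 0) + 1
    return counter

def merge(a, b):
    # Per-key maximum: commutative, associative, idempotent,
    # so replicas can sync in any order and converge.
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(counter):
    return sum(counter.values())

# Two replicas diverge while offline...
phone = increment(increment({}, "phone"))   if False else increment(increment({}, "phone"), "phone")  # {"phone": 2}
laptop = increment({}, "laptop")                                                                      # {"laptop": 1}

# ...and syncing later merges both histories instead of picking a winner.
assert value(merge(phone, laptop)) == 3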

How to Add Title Attribute in WordPress Navigation Menus

Are you looking to add title attributes to your WordPress navigation menu items?

The title attribute allows you to provide extra information about a menu item. It often appears as tooltip text when a user’s mouse moves over the link.

In this article, we will show you how to add title attributes in WordPress navigation menus for both classic themes and block themes.

How to Add Title Attribute in WordPress Navigation Menus

Why Add Title Attributes to Navigation Menu Items?

In WordPress, you can add a title attribute to better describe any HTML element. This is often used with links and images to provide extra information that appears as a tooltip when the user hovers their mouse over the element.

Here’s an example of an image title attribute displayed in a tooltip. The user can learn more information about the image by moving their mouse over it.

An image with the title text

You can learn more in our guide on the difference between image alt text vs. title.

We also recommend you use the title attribute when adding links to your post. This allows users to see where the link will take them before they click it.
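Under the hood, the result is ordinary HTML: a link with a title attribute looks roughly like this (the URL and title text here are just placeholders):

<a href="https://example.com/pricing" title="Compare our plans and pricing">Pricing</a>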

Some SEO experts believe that the title attribute is useful for search engine optimization (SEO) because it allows you to provide more context about a link.

The title attribute may also be read out loud by screen readers that are used by visually impaired users. However, it is often ignored, and the anchor text is read instead.

With that being said, let’s take a look at how to add the title attribute in WordPress navigation menus. You can use the links below to jump to the method that works with your theme:

Adding a Title Attribute to Classic Theme Menu Items

If you are using a classic WordPress theme, then you can customize your navigation menu by visiting Appearance » Menus in your dashboard.

However, you are not able to add a title attribute to menu entries by default.

To add this capability, you will need to click the ‘Screen Options‘ tab in the top right corner of the screen. This will bring down a menu, where you need to click on the check box next to the ‘Title Attribute’ option.

Enabling Image Attributes in Screen Options

This will add a title attribute field when you create or edit a menu entry.

Now, you can scroll down and click on any menu item in your existing menu to expand it. You will see the title attribute field.

Adding a Title Attribute to a Classic Theme Menu Item

You can now add the text you want to use as a title. You can also expand other menu items and add title attributes to them.

Don’t forget to click on the ‘Save Menu’ button at the bottom of the page to store your changes.

You can now visit your WordPress website and hover your mouse over a link in the navigation menu. You will see the title attribute displayed as a tooltip.

Preview of a Menu Entry Title Attribute When Using a Classic Theme

Adding a Title Attribute to Block Theme Menu Items

If you are using a block theme, then you can customize your navigation menu using the Full Site Editor. This editor allows you to add title attributes to your menu entries by default.

First, you need to navigate to Appearance » Editor in your WordPress admin area and then click on the ‘Navigation’ option to find your menus.

Go to the Navigation Section of the Full Site Editor and Select a Menu

You will need to select the menu you wish to edit from the list.

Now, you can click the preview pane on the left to open the editor full screen. Make sure you can see the settings pane on the left. If not, then you can display it by clicking the ‘Settings’ button at the top of the screen.

Click a Menu Entry in the Settings Pane

Next, click the menu item in the settings pane that you wish to edit. This will display the options for that entry, including the title attribute.

Simply type your title into the ‘Title Attribute’ field.

Add a Title Attribute in the Settings Pane

Make sure you click the ‘Save’ button at the top of the screen to store the new settings. You will need to click a second ‘Save’ button to confirm.

Now, you can visit your website to see the menu title attribute in action.

Preview of a Menu Entry Title Attribute in a Block Theme

Expert Guides for Customizing WordPress Navigation Menus

Now that you know how to add a title attribute in your navigation menu, you may like to see some other articles related to customizing the WordPress navigation menu:

We hope this tutorial helped you learn how to add a title attribute in WordPress navigation menus. You may also want to see our guide on how to increase your blog traffic or our expert pick of the best contact form plugins for WordPress.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.


Power BI: Transforming Banking Data

As a data architect or data engineer, you know how vital it is to fully understand the power of your data. This task holds even more gravity in the banking sector, where every decision and strategy bears severe financial and reputational risks.

There are multiple tools in the market that can help you simplify data analysis and transform it into a strategic asset. Today, I would like to tell you about one of the best instruments available – Power BI. With its advanced analytics and visualizations, this suite of business analytics tools is well-suited to tackle the intricate challenges banks face daily, transforming your data into actionable insights.

Integrating Snowflake With Trino

In today's discourse, we delve into the intricacies of accessing Snowflake via the Trino project. This article illuminates the seamless integration of Trino with Snowflake, offering a comprehensive analysis of its benefits and implications.

Previous Articles

Previous articles on Snowflake and Trino:

Crafting a Custom Sports Activity Service With OpenAI and Node.js

Enter the realm of Artificial Intelligence (AI), where machine learning models and intelligent algorithms are revolutionizing how we interact with data, make decisions, and predict outcomes. The fusion of AI with Node.js opens a portal to a multitude of possibilities, transforming the landscape of web services across various domains, including sports and fitness.

Lifecycle

At the base of the application, we are going to have daily sports activity suggestions generated by AI. That means we do an activity the AI suggested, then ask the AI for the next suggestion. We get the response in our application and expose it for the next day. Users then see a new suggestion plan day after day.
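To make that loop concrete, here is a rough sketch of the suggestion step (shown in Python with the OpenAI client for brevity, although the article builds the service in Node.js; the prompt wording, model choice, and function name are illustrative):

import os
from openai import OpenAI

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

def next_day_suggestion(todays_activity):
    # Tell the model what the user actually did and ask for tomorrow's plan.
    prompt = ("The user completed this sports activity today: {}. "
              "Suggest one suitable activity for tomorrow, in a short sentence.").format(todays_activity)

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    # Store this and expose it to the user the next day.
    return response.choices[0].message.content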

AI-Driven API and Microservice Architecture Design for Cloud

Incorporating AI into API and microservice architecture design for the Cloud can bring numerous benefits. Here are some key aspects where AI can drive improvements in architecture design:

  • Intelligent planning: AI can assist in designing the architecture by analyzing requirements, performance metrics, and best practices to recommend optimal structures for APIs and microservices.
  • Automated scaling: AI can monitor usage patterns and automatically scale microservices to meet varying demands, ensuring efficient resource utilization and cost-effectiveness.
  • Dynamic load balancing: AI algorithms can dynamically balance incoming requests across multiple microservices based on real-time traffic patterns, optimizing performance and reliability.
  • Predictive analytics: AI can leverage historical data to predict usage trends, identify potential bottlenecks, and offer proactive solutions for enhancing the scalability and reliability of APIs and microservices.
  • Continuous optimization: AI can continuously analyze performance metrics, user feedback, and system data to suggest improvements for the architecture design, leading to enhanced efficiency and user satisfaction.

By integrating AI-driven capabilities into API and microservice architecture design on Azure, organizations can achieve greater agility, scalability, and intelligence in managing their cloud-based applications effectively. 

Initializing Services in Node.js Application

While working on a user model, I found myself navigating through best practices and diverse strategies for managing a token service, transitioning from straightforward functions to a fully-fledged, independent service equipped with handy methods. I delved into the nuances of securely storing and accessing secret tokens, discerning between what should remain private and what could be public. Additionally, I explored optimal scenarios for deploying the service or function and pondered the necessity of its existence. This article chronicles my journey, illustrating the evolution from basic implementations to a comprehensive, scalable solution through a variety of examples.

Services

In a Node.js application, services are modular, reusable components responsible for handling specific business logic or functionality, such as user authentication, data access, or third-party API integration. These services abstract away complex operations behind simple interfaces, allowing different parts of the application to interact with these functionalities without knowing the underlying details. By organizing code into services, developers achieve separation of concerns, making the application more scalable, maintainable, and easier to test. Services play a crucial role in structuring the application’s architecture, facilitating a clean separation between the application’s core logic and its interactions with databases, external services, and other application layers. I decided to show an example with JWT Service. Let’s jump to the code.
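To give a flavor of the pattern before diving in, here is a minimal sketch of a token service (written in Python with the PyJWT package purely for illustration; the article's own implementation is a Node.js JWT service, and the class and method names here are hypothetical). The essence is a small module that hides signing and verification behind a simple interface:

import datetime
import jwt  # PyJWT: pip install PyJWT

class TokenService:
    """Hides JWT signing and verification behind a simple interface."""

    def __init__(self, secret, ttl_minutes=30):
        self._secret = secret  # keep private; load from env vars or config
        self._ttl = ttl_minutes

    def issue(self, user_id):
        payload = {
            "sub": user_id,
            "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=self._ttl),
        }
        return jwt.encode(payload, self._secret, algorithm="HS256")

    def verify(self, token):
        # Raises jwt.InvalidTokenError if the token is tampered with or expired.
        return jwt.decode(token, self._secret, algorithms=["HS256"])

The rest of the application only ever calls issue() and verify(); where the secret lives and which algorithm is used stay encapsulated inside the service.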

Claude 3 Opus Vs. Google Gemini Vs. GPT-4 for Zero-Shot Text Classification

On March 4, 2024, Anthropic launched the Claude 3 family of large language models. Anthropic claimed that its Claude 3 Opus model outperforms GPT-4 on various benchmarks.

Intrigued by Anthropic's claim, I performed a simple test to compare the performances of Claude 3 Opus, Google Gemini Pro, and OpenAI's GPT-4 for zero-shot text classification. This article explains the experiment and the results obtained, along with my personal observations.

Note: I have already compared the performance of Google Gemini Pro and ChatGPT on another dataset in one of my previous articles. This article adds Claude 3 Opus to the list of compared models. In addition, the tests are performed on a significantly more difficult dataset.

So, let's begin without further ado.

Importing and Installing Required Libraries

The following script installs the client libraries required to access the Claude 3 Opus, Google Gemini Pro, and OpenAI GPT-4 models.


!pip install anthropic
!pip install --upgrade google-cloud-aiplatform
!pip install openai

The script below imports the required libraries.


import os
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

import anthropic
from openai import OpenAI
import vertexai
from vertexai.preview.generative_models import GenerativeModel, Part

Importing and Preprocessing the Dataset

We will use LLMs to make zero-shot predictions on the US Airline Sentiment dataset, which you can download from Kaggle.

The dataset consists of tweets regarding various US airlines. The tweets are manually annotated for positive, negative, or neutral sentiments. The text column contains the tweet texts, while the airline_sentiment column contains sentiment labels.

The following script imports the dataset, prints the dataset shape, and displays the dataset's first five rows.


## Dataset download link
## https://www.kaggle.com/datasets/crowdflower/twitter-airline-sentiment?select=Tweets.csv

dataset = pd.read_csv(r"D:\Datasets\tweets.csv")
print(dataset.shape)
dataset.head()

Output:

image1.png

The dataset originally consisted of 14,640 records. However, for this comparison, we will randomly select 100 records with an equal proportion of tweets from each sentiment category, i.e., 34 neutral and 33 each for positive and negative sentiments. The following script selects 100 random tweets.


# Remove rows where 'airline_sentiment' or 'text' are NaN
dataset = dataset.dropna(subset=['airline_sentiment', 'text'])

# Remove rows where 'airline_sentiment' or 'text' are empty strings
dataset = dataset[(dataset['airline_sentiment'].str.strip() != '') & (dataset['text'].str.strip() != '')]

# Filter the DataFrame for each sentiment
neutral_df = dataset[dataset['airline_sentiment'] == 'neutral']
positive_df = dataset[dataset['airline_sentiment'] == 'positive']
negative_df = dataset[dataset['airline_sentiment'] == 'negative']

# Randomly sample records from each sentiment
neutral_sample = neutral_df.sample(n=34)
positive_sample = positive_df.sample(n=33)
negative_sample = negative_df.sample(n=33)

# Concatenate the samples into one DataFrame
dataset = pd.concat([neutral_sample, positive_sample, negative_sample])

# Reset index if needed
dataset.reset_index(drop=True, inplace=True)

# print value counts
print(dataset["airline_sentiment"].value_counts())

Output:

airline_sentiment
neutral     34
positive    33
negative    33
Name: count, dtype: int64

We are now ready to perform zero-shot classification with various large language models.

Zero Shot Text Classification with Google Gemini Pro

To access the Google Gemini Pro model via the Google Cloud API, you need to create a project in the VertexAI Service Account and download the JSON credentials file for the project. Next, you need to create an environment variable GOOGLE_APPLICATION_CREDENTIALS and set its value to the path of the JSON file you just downloaded.

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = "PATH_TO_VERTEX_AI_SERVICE_ACCOUNT JSON FILE"

The rest of the process is straightforward. You create an object of the GenerativeModel class and pass the input query to the model.generate_content() method.

In the following script, we define the find_sentiment_gemini() function, which accepts a tweet and returns its sentiment. Pay attention to the content variable. It contains the prompt we will pass to our Google Gemini Pro model. The prompt will also remain the same for the rest of the models.


model = GenerativeModel("gemini-pro")
config = {
    "max_output_tokens": 10,
    "temperature": 0.0,
}

def find_sentiment_gemini(tweet):

    content = """What is the sentiment expressed in the following tweet about an airline?
    Select sentiment value from positive, negative, or neutral. Return only the sentiment value in small letters.
    tweet: {}""".format(tweet)

    responses = model.generate_content(
        content,
        generation_config=config,
        stream=True,
    )

    # With streaming enabled, the first chunk is enough to hold the
    # one-word sentiment label, so return its text immediately.
    for response in responses:
        return response.text

Finally, we iterate through all the tweets from the text column in our dataset and pass the tweets to the find_sentiment_gemini() method. The response is saved in the all_sentiments list. We time the script to see how long it takes to execute.


%%time

all_sentiments = []

tweets_list = dataset["text"].tolist()

i = 0
exceptions = 0
while i < len(tweets_list):

    try:
        tweet = tweets_list[i]
        sentiment_value = find_sentiment_gemini(tweet)
        all_sentiments.append(sentiment_value)
        i = i + 1
        print(i, sentiment_value)

    except Exception as e:
        print("===================")
        print("Exception occurred:", e)
        # i is not incremented here, so the failed tweet is retried
        exceptions = exceptions + 1

print("Total exception count:", exceptions)

Output:

Total exception count: 0
CPU times: total: 312 ms
Wall time: 54.5 s

The above output shows that the script took 54.5 seconds to run.

Finally, you can calculate model accuracy by comparing the values in the airline_sentiment columns of the dataset with the all_sentiments list.

accuracy = accuracy_score(all_sentiments, dataset["airline_sentiment"])
print("Accuracy:", accuracy)

Output:

Accuracy: 0.78

You can see that the model achieves 78% accuracy.

Zero Shot Text Classification with GPT-4

Let's now perform zero-shot classification with GPT-4. The process remains the same. We will first create an OpenAI client using the OpenAI API key.

Next, we define the find_sentiment_gpt() function, which internally calls the OpenAI.chat.completions.create() method to generate a response for the input tweet.

client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)


def find_sentiment_gpt(tweet):

    content = """What is the sentiment expressed in the following tweet about an airline?
    Select sentiment value from positive, negative, or neutral. Return only the sentiment value in small letters.
    tweet: {}""".format(tweet)

    sentiment = client.chat.completions.create(
      model= "gpt-4",
      temperature = 0,
      max_tokens = 10,
      messages=[
            {"role": "user", "content": content}
        ]
    )

    return sentiment.choices[0].message.content

Next, we iterate through all the tweets and pass each tweet to the find_sentiment_gpt() function. The responses are stored in the all_sentiments list.


%%time

all_sentiments = []

tweets_list = dataset["text"].tolist()

i = 0
exceptions = 0
while i < len(tweets_list):

    try:
        tweet = tweets_list[i]
        sentiment_value = find_sentiment_gpt(tweet)
        all_sentiments.append(sentiment_value)
        i = i + 1
        print(i, sentiment_value)

    except Exception as e:
        print("===================")
        print("Exception occurred:", e)
        # i is not incremented here, so the failed tweet is retried
        exceptions = exceptions + 1

print("Total exception count:", exceptions)

Output:

Total exception count: 0
CPU times: total: 250 ms
Wall time: 49.4 s

The GPT-4 model took 49.4 seconds to process 100 tweets.

The following script prints the model's accuracy.

accuracy = accuracy_score(all_sentiments, dataset["airline_sentiment"])
print("Accuracy:", accuracy)

Output:

Accuracy: 0.79

The output shows that GPT-4 achieves a slightly better accuracy (79%) than Google Gemini Pro (78%).

Zero Shot Text Classification with Claude 3 Opus

Finally, let's try the so-called best, Claude 3 Opus. To generate text using Claude 3, you need to create a client object of the anthropic.Anthropic class and pass it your Anthropic API key, which you can retrieve by signing up for the Claude console.

You can call the messages.create() method on the Anthropic client to generate a response.

The following script defines the find_sentiment_claude() method, which returns the sentiment of a tweet using the Claude 3 Opus model.

client = anthropic.Anthropic(
    # defaults to os.environ.get("ANTHROPIC_API_KEY")
    api_key = os.environ.get('ANTHROPIC_API_KEY')
)

def find_sentiment_claude(tweet):

    content = """What is the sentiment expressed in the following tweet about an airline?
    Select sentiment value from positive, negative, or neutral. Return only the sentiment value in small letters.
    tweet: {}""".format(tweet)

    sentiment = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1000,
        temperature=0.0,
        messages=[
            {"role": "user", "content": content}
        ]
    )

    return sentiment.content[0].text

We can pass all the tweets to the find_sentiment_claude() function and store the corresponding responses in the all_sentiments list. Finally, we can compare the response predictions with the actual sentiment labels to calculate the model's accuracy.

%%time

all_sentiments = []

tweets_list = dataset["text"].tolist()

i = 0
exceptions = 0
while i < len(tweets_list):

    try:
        tweet = tweets_list[i]
        sentiment_value = find_sentiment_claude(tweet)
        all_sentiments.append(sentiment_value)
        i = i + 1
        print(i, sentiment_value)

    except Exception as e:
        print("===================")
        print("Exception occurred:", e)
        # i is not incremented here, so the failed tweet is retried
        exceptions = exceptions + 1

print("Total exception count:", exceptions)

accuracy = accuracy_score(all_sentiments, dataset["airline_sentiment"])
print("Accuracy:", accuracy)

Output:

Total exception count: 0
Accuracy: 0.71
CPU times: total: 141 ms
Wall time: 3min 8s

The above output shows that Claude 3 Opus took 3 minutes and 8 seconds to process 100 tweets and achieved an accuracy of only 71%, substantially lower than GPT-4 and Gemini Pro. Given Anthropic's big claims, I was not impressed.
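For reference, here are the three runs side by side:

Model               Accuracy   Wall time (100 tweets)
Google Gemini Pro   78%        54.5 s
OpenAI GPT-4        79%        49.4 s
Claude 3 Opus       71%        3 min 8 s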

Conclusion

The results of the experiments in this article show that, despite Anthropic's lofty claims, the performance of Claude 3 Opus on a simple task such as zero-shot text classification was not up to the mark. I would still use GPT-4 or Gemini Pro for zero-shot text classification tasks.

Securing AWS RDS SQL Server for Retail: Comprehensive Strategies and Implementation Guide

In the retail industry, the security of customer data, transaction records, and inventory information is paramount. As many retail stores migrate their databases to the cloud, ensuring the security of these data repositories becomes crucial. Amazon Web Services (AWS) Relational Database Service (RDS) for SQL Server offers a powerful platform for hosting retail databases with built-in security features designed to protect sensitive information. This article provides a detailed guide on securing AWS RDS SQL Server instances, tailored for retail stores, with practical setup examples.

Understanding the Importance of Database Security in Retail

Before delving into the specifics of securing an RDS SQL Server instance, it's essential to understand why database security is critical for retail stores. Retail databases contain sensitive customer information, including names, addresses, payment details, and purchase history. A breach could lead to significant financial loss, damage to reputation, and legal consequences. Therefore, implementing robust security measures is not just about protecting data but also about safeguarding the business's integrity and customer trust.