State of CSS 2020 Survey Results: Tailwind CSS Wins Most Adopted Technology, Utility-First CSS on the Rise

The State of CSS 2020 survey results have just been published, with a summary of the tools, methodologies, frameworks, and libraries that are currently favored by CSS professionals. It includes data from 11,492 respondents in 102 countries, after the questions were translated for the first time into a dozen different languages.

In the layout category, CSS Grid logged a 34% increase over the prior year in respondents who report having used it to position elements on the screen. It won an award for “Most Adopted Feature,” which is assigned to the feature with the largest year-over-year “have used” progression. Only 6% of respondents said they have used Subgrid, which is included in Level 2 of the CSS Grid Layout specification.

CSS Flexible Box Layout has been used by 97.5% of respondents, a ~3% increase over the previous year. Multi-column Layout saw a moderate increase in usage and a small decrease in awareness. CSS Grid experienced the most growth by far in this category.

The technologies section is one of the most interesting parts of the survey, as the CSS ecosystem is constantly changing. The results include a scatter plot graph showing the relationship between each technology’s satisfaction ratio and its user count. Technologies in the “avoid” and “analyze” groupings are likely to decline in usage soon (or have already fallen out of favor).

Tailwind CSS is once again the front-runner among CSS frameworks, followed by Bulma, which seems to be slowly waning in popularity. Tailwind CSS won the award for “Most Adopted Technology,” given to the technology with the largest year-over-year “would use again” progression, with a +17.8% progression over 2019. PureCSS, Ant Design, and Materialize CSS also recorded gains in their rankings from the previous year.

A larger trend emerging is utility-first CSS frameworks and tools gaining momentum among professionals. The utility-first approach, which eschews traditional semantic class naming in favor of more functional class names, has its ardent critics. They argue it is an eyesore reminiscent of inline styles and that it essentially drops the “cascading” aspect of CSS. Nevertheless, its proponents appreciate being able to look at the HTML and see at a glance which styles are applied, as well as the enforced consistency it offers.

If you are interested in the finer details, such as which properties, positioning features, shapes, graphics, and interaction techniques professionals are using, check out the full report. Each section has recommended resources for learning more about popular and emerging technologies and techniques, including industry podcasts and blogs that professionals are currently enjoying.

The State of JavaScript survey is also now open; once its results are published, it will offer a similar treasure trove of data on the JavaScript ecosystem.

Cloudinary Fetch with Eleventy (Respecting Local Development)

This is about a wildly specific combination of technologies — Eleventy, the static site generator, with pages with images on them that you ultimately want hosted by Cloudinary — but I just wanna document it as it sounds like a decent number of people run into this situation.

The deal:

  • Cloudinary has a fetch URL feature, meaning you don’t actually have to learn anything (nice!) to use their service. You have to have an account, but after that you just prefix your images with a Cloudinary URL and then it is Cloudinary that optimizes, resizes, formats, and CDN serves your image. Sweet. It’s not the only service that does this, but it’s a good one.
  • But… the image needs to be on the live public internet. In development, your image URLs probably are not. They’re likely stored locally. So ideally we could keep using local URLs for images in development, and do the Cloudinary fetching on production.

Multiple people have solved this in different ways. I’m going to document how I did it (because I understand it best), but also link up how several other people have done it (which might be smarter, you be the judge).

The goal:

  • In development, images be like /images/image.png
  • In production, images be like https://res.cloudinary.com/css-tricks/image/fetch/w_1200,q_auto,f_auto/https://production-website.com/images/image.png

So if we were to template that (let’s assume Nunjucks here as it’s a nice templating language that Eleventy supports), we get something like this pseudo-code:

<img
  src="{{CLOUDINARY_PREFIX}}{{FULLY_QUALIFIED_PRODUCTION_URL}}{{RELATIVE_IMAGE_URL}}"
  alt="Don't screw this up, fam."
/>
Variable | Development | Production
{{CLOUDINARY_PREFIX}} | "" | "https://res.cloudinary.com/css-tricks/image/fetch/w_1200,q_auto,f_auto/"
{{FULLY_QUALIFIED_PRODUCTION_URL}} | "" | "https://production-website.com"
{{RELATIVE_IMAGE_URL}} | "/images/image.jpg" | "/images/image.jpg"

The trick then is getting those… I guess we’ll call them global variables?… set up. It’s probably just those first two. The relative image path you’d likely just write by hand as needed.

Eleventy has some magic available for this. Any *.js file we put in a _data folder will turn into variables we can use in templates. So if we made like /src/_data/sandwiches.js and it was:

module.exports = {
  ham: true
}

In our template, we could use {{sandwiches.ham}} and it would output true.
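For example, in a Nunjucks template (a trivial sketch):

{# any template can read the _data file by its filename #}
<p>Ham on the menu? {{ sandwiches.ham }}</p>
{# renders as: <p>Ham on the menu? true</p> #}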

Because this is JavaScript (Node), we have the ability to do some logic based on other variables. In our case, some other global variables will be useful, particularly the process.env variables that Node makes available. A lot of hosts (Netlify, Vercel, etc.) let you set up “environment variables” in their dashboards, so that process.env has them available when build processes run on their servers. We could do that, but that’s rather specific and tied to those hosts. Another way to set an environment variable is to literally set it on the command line before you run a command, so if you were to do:

SANDWICH="ham" eleventy

Then process.env.SANDWICH would be ham anywhere in your Node JavaScript. Combining all that… let’s say that our production build process sets a variable indicating production, like:

PROD="true" eleventy
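In practice, that often lives in the project’s npm scripts, something like this sketch (the script names here are an assumption, not part of the original setup):

{
  "scripts": {
    "start": "eleventy --serve",
    "build": "PROD=true eleventy"
  }
}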

But on local development, we’ll run without that global variable. So let’s make use of that information while setting up some global variables to use to construct our image sources. In /src/_data/images.js (full real-world example) we’ll do:

module.exports = {

  // Domain the source images live on: the production URL when building
  // for production, empty so paths stay relative in development.
  imageLocation:
    process.env.PROD === 'true'
      ? 'https://coding-fonts.css-tricks.com'
      : '',

  // Cloudinary fetch prefix, applied only on production builds.
  urlPrefix:
    process.env.PROD === 'true'
      ? 'https://res.cloudinary.com/css-tricks/image/fetch/w_1600,q_auto,f_auto/'
      : ''

};

You could also check process.env.CONTEXT === 'deploy-preview' to test for Netlify deploy preview URLs, in case you want to change the logic there one way or the other.
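For instance, a deploy preview could be treated like production so the fetched images resolve against the preview URL. Here’s a sketch (Netlify exposes CONTEXT and DEPLOY_PRIME_URL at build time, but double-check those names against your own build environment):

// /src/_data/images.js — a variant that also handles Netlify deploy previews
const isProd = process.env.PROD === 'true';
const isPreview = process.env.CONTEXT === 'deploy-preview';

module.exports = {
  // Deploy previews are public, so their URL works for Cloudinary fetches too.
  imageLocation: isProd
    ? 'https://coding-fonts.css-tricks.com'
    : isPreview
      ? process.env.DEPLOY_PRIME_URL
      : '',
  urlPrefix: (isProd || isPreview)
    ? 'https://res.cloudinary.com/css-tricks/image/fetch/w_1600,q_auto,f_auto/'
    : ''
};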

Now in any of our templates, we can use {{images.imageLocation}} and {{images.urlPrefix}} to build out the sources.

<img
  src="{{images.urlPrefix}}{{images.imageLocation}}/image.png"
  alt="Useful alternative text."
/>

And there we go. That will be a local/relative source in development, and then on production, it becomes this prefixed and fully qualified URL from which Cloudinary’s fetch will work.

Now that it’s on Cloudinary, we can take it a step further. The prefix URL can be adjusted to resize images, meaning that even with just one source image, we can pull off a rather appropriate setup for responsive images. Here’s that setup, which makes multiple prefixes available, so they can be used for the full responsive images syntax.
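The linked example has the full details; as a rough sketch of the shape of it (the prefix names and widths below are my assumptions, not the exact values used):

// /src/_data/images.js — one prefix per size we want Cloudinary to produce
const fetchPrefix = (width) =>
  `https://res.cloudinary.com/css-tricks/image/fetch/w_${width},q_auto,f_auto/`;

const isProd = process.env.PROD === 'true';

module.exports = {
  imageLocation: isProd ? 'https://coding-fonts.css-tricks.com' : '',
  urlPrefixSmall: isProd ? fetchPrefix(600) : '',
  urlPrefixMedium: isProd ? fetchPrefix(1200) : '',
  urlPrefixLarge: isProd ? fetchPrefix(1600) : ''
};

Each of those prefixes then pairs with a width descriptor in srcset, which is what the markup below assumes.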

The end result means locally relative images in development:
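(Illustrative markup, assuming the multi-prefix sketch above; your exact attributes will differ.)

<img
  src="/image.png"
  srcset="/image.png 600w,
          /image.png 1200w,
          /image.png 1600w"
  sizes="100vw"
  alt="Useful alternative text."
/>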

The multiple versions are a lie in development, but oh well, srcset is kind of a production concern.

…and Cloudinary fetch URLs in production:
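(Again illustrative, with the same assumed prefixes and production domain.)

<img
  src="https://res.cloudinary.com/css-tricks/image/fetch/w_1600,q_auto,f_auto/https://coding-fonts.css-tricks.com/image.png"
  srcset="https://res.cloudinary.com/css-tricks/image/fetch/w_600,q_auto,f_auto/https://coding-fonts.css-tricks.com/image.png 600w,
          https://res.cloudinary.com/css-tricks/image/fetch/w_1200,q_auto,f_auto/https://coding-fonts.css-tricks.com/image.png 1200w,
          https://res.cloudinary.com/css-tricks/image/fetch/w_1600,q_auto,f_auto/https://coding-fonts.css-tricks.com/image.png 1600w"
  sizes="100vw"
  alt="Useful alternative text."
/>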

Other People’s Ideas

Phil was showing off using Netlify redirects to do this the other day:
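I don’t have Phil’s exact rules handy, but the shape of the idea is a proxy rule in Netlify’s _redirects file, something like this sketch (the domain and transform parameters are placeholders):

# Proxy image requests through Cloudinary's fetch URL; status 200 rewrites
# rather than redirects, so the browser keeps the original URL.
/images/* https://res.cloudinary.com/css-tricks/image/fetch/q_auto,f_auto/https://production-website.com/images/:splat 200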

Then the trick to local development is catching the 404s and redirecting them locally with more redirects.

If hand-crafting your own responsive images syntax is too big of a pain (it is), I highly recommend abstracting it. In Eleventy-land, Nicolas Hoizey has a project: eleventy-plugin-images-responsiver. Eric Portis has one as well, eleventy-respimg, which specifically uses Cloudinary as I have here.

Proving this stuff has really been on people’s minds, Tim Kadlec just blogged “Proxying Cloudinary Requests with Netlify.” He expands on Phil’s tweet, adding some extra performance context and gotchas.



How to Optimize a PDF in Java

As we have discussed before, the PDF is the ideal file format for saving, sharing, and protecting documents, both small and large. Its high compatibility with most operating systems makes it popular among users for sharing information with different parties. Furthermore, it provides a more static platform for working with important documents like contracts and manuals, as steps can be taken to prevent any unwanted access to or editing of the file.

With large and highly complex files like this, however, different systems may have difficulty uploading, downloading, and reading the formatting for your document. This can lead to file corruption or increased loading times that can halt productivity. Thus, streamlining large PDF files can greatly benefit organizations that regularly use this format in day-to-day operations. 

Tool to migrate away from jQuery?

Does anyone know of an automated tool that will convert a lot of jQuery code to native JavaScript?

I think the bulky jQuery library is really slowing down my site, but manually converting all my code seems very daunting.

Ideally there would be something that was so flawless I could continue coding with jQuery and it would deploy my code as native JS. Hey, a girl can dream, right?

Monitoring YugabyteDB in Kubernetes With the Prometheus Operator and Grafana

Using the Prometheus Operator has become a common choice when it comes to running Prometheus in a Kubernetes cluster. It can manage Prometheus and Alertmanager for us with the help of CRDs in Kubernetes. The kube-prometheus-stack Helm chart (formerly known as prometheus-operator) comes with Grafana, node_exporter, and more out of the box.

In a previous blog post about Prometheus, we took a look at setting up Prometheus and Grafana using manifest files. We also explored a few of the metrics exposed by YugabyteDB. In this post, we will be setting up Prometheus and Grafana using the kube-prometheus-stack chart. And we will configure Prometheus to scrape YugabyteDB pods. At the end, we will take a look at the YugabyteDB Grafana dashboard that can be used to visualize all the collected metrics.
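For context, getting the chart itself running is typically a couple of Helm commands along these lines (the release and namespace names here are placeholders, not the exact ones used in the walkthrough):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace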

AI Will Be the Game Changer for IoT

Gartner expects that three trends will affect AI in the next few years. First, better communication (both ways) with people: natural-language processing, generation, and contextual interpretation will make AI more comfortable to use and will improve the use of all computing resources.

Second, more in-depth and broader integration with existing applications and IoT projects: AI has its most significant value when built into architectures that drive business and service value.

HarperDB vs MongoDB vs PostgreSQL

Many people learn or understand new things relative to things they already know. This makes sense; it’s probably a natural instinct. When it comes to products and technology, a lot of people ask “how are you different,” but different from what? You need some sort of baseline to start from, so you can say, “Similar to X, but different because of Y.” Because of this, comparisons, competitive analysis, and feature matrices are a great way to understand which technology solutions are right for you. So today let’s do a comparison of three different database systems.

As stated in my Database Architectures and Use Cases article: In most cases, it’s not that one database is better than the other, it’s that one is a better fit for a specific use case due to numerous factors. The point of this article is not to determine which database is the best, but to help uncover the factors to consider when selecting a database for your specific project. With MongoDB and PostgreSQL being two of the most popular tools out there, you may already know that there are tons of resources comparing the two. However, with HarperDB being a net new database, I thought it might be helpful to throw it in the mix to provide further clarity.

Cracking the Continuous Deployment Code

Relief. The first thing that comes to a developer’s mind when they hear the phrase “software release,” regardless of whether it’s manual or continuous deployment. That code they’ve been working on for ages is finally live. All bugs cleared. Releasing software means the job is done. And it’s well done.

Everyone’s happy to present their new product. But only those who have really worked on it know the huge amounts of effort it took to get there—especially teams who still do manual deployment instead of continuous deployment.

Pandemic-Driven AI Adoption is Remaking Industries, Creating an Uncertain Future

Throughout 2020, the COVID-19 pandemic has dominated headlines around the world, and rightly so. And for the most part, discussions about how it intersected with the technology world have centered on things like contact tracing and epidemiology research topics. But as the pandemic has continued to batter populations and economies everywhere, there's growing evidence that it's becoming a major digital change driver across a wide variety of industries, too.

One after another, surveys continue to uncover an acceleration of the ongoing trend toward AI adoption in multiple fields. One, in particular, found that 68% of businesses have increased their spending on AI technology as a part of their pandemic response. When all is said and done, we could be witnessing a fundamental shift that will leave the economic landscape looking very different than it did less than a year ago.

Linear Regression Model

Linear regression is a machine learning technique that is used to establish a relationship between a scalar response and one or more explanatory variables. The scalar response is called the target or dependent variable, while the explanatory variables are known as independent variables. When more than one independent variable is used in the modeling technique, we call it multiple linear regression.

Independent variables are known as explanatory variables because they explain the factors that control the dependent variable, along with the degree of their impact, which is expressed through ‘parameter estimates’ or ‘coefficients’.
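In standard notation, the multiple linear regression model can be written as:

y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_p x_p + \varepsilon

where y is the dependent variable, x_1 through x_p are the independent variables, the coefficients \beta_1 through \beta_p capture the degree of impact of each, \beta_0 is the intercept, and \varepsilon is the error term.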

Nebula Operator: Automating Nebula Graph Cluster Deployment and Maintenance on K8s

Nebula Operator is a plug-in that deploys, operates, and maintains Nebula Graph automatically on K8s. Building upon the extension mechanisms of K8s, it encodes Nebula Graph operation and maintenance knowledge into the K8s system in the CRD + Controller format, which makes Nebula Graph a truly cloud-native graph database.

Nebula Graph is a high-performance distributed open source graph database. From the architecture chart below, we can see that a complete Nebula Graph cluster is composed of three types of services, namely the Meta Service, Query Service (Computation Layer) and Storage Service (Storage Layer).

5 Ways to Adapt Your Analytics Strategy to the New Normal

COVID-19 has upended all traditional business models and made years of carefully curated data and forecasting practically irrelevant. With the world on its head, consumers can’t be expected to behave the same way they did 9 months ago, and we’ve witnessed major shifts in how and where people and businesses are spending their money. This new normal, the “novel economy” as many have dubbed it, requires business leaders to think on their feet and adjust course quickly while managing the economic impact of lockdowns, consumer fear, and continual uncertainty. The decisions they make today will affect their company’s trajectory for years to come, so it is more important than ever to be empowered to make informed business decisions.

In recent years, organizations across industries have started to implement advanced analytics programs at a record pace, drawn by the allure of increased efficiency and earnings. According to McKinsey, these technologies are expected to offer between $9.5 and $15.4 trillion in annual economic value when properly implemented. However, most organizations struggle to overcome cultural and organizational hurdles, such as adopting agile delivery methods or strong data practices. In other words, adopting advanced analytics programs is happening across the board, but successful implementation takes a long time.

What Is Time-Series Data, and Why Are We Building a Time-Series Database (TSDB)?

Like all good superheroes, every company has its own origin story explaining why they were created and how they grew over time. This article covers the origin story of QuestDB and frames it with an introduction to time series databases to show where we sit in that landscape today.

Part I: Time Series Data and Characteristics of TSDBs

Time-Series Data Explained

A time series is a succession of data points ordered by time. These data points could be a succession of events from an application’s users, the state of CPU and memory usage over time, financial trades recorded every microsecond, or sensors from a car emitting data about the vehicle’s acceleration and velocity.
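For example, a CPU metric stream is just timestamped rows like these (illustrative values):

timestamp              | cpu_usage_percent
2020-12-01T10:00:00Z   | 54.1
2020-12-01T10:00:10Z   | 57.3
2020-12-01T10:00:20Z   | 55.8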

Favorite Oculus Quest apps

With the launch of Oculus Quest 2 (I have the original Quest), I was wondering if anyone had any favorite games they could recommend.

I’ve been having a lot of fun with Tetris Effect somewhat recently. However, I’ve been getting a lot of migraines and have tried to avoid VR the past couple of weeks. I really think Covid time is just getting to me and my poor sleep habits.

Snowflake Data Encryption and Data Masking Policies

Introduction

Snowflake architecture has been built with security in mind from the very beginning. A wide range of security features is available at no additional cost, from data encryption to advanced key management, versatile security policies, and role-based data access, among others. This post will describe the data encryption and data masking functionalities.

Snowflake Security Reference Architecture

Snowflake Security Reference Architecture includes various state-of-the-art security techniques that offer multiple outstanding cloud security capabilities. It includes encryption of data at rest, secure transfer of data in transit, role-based table access, column- and row-level access to a particular table, network access/IP range filtering, multi-factor authentication, federated single sign-on, and more.
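As a taste of the masking side, a dynamic data masking policy is defined in SQL and attached to a column. The sketch below uses placeholder object and role names:

-- Only a privileged role sees the real value; everyone else gets a masked string.
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('PII_ANALYST') THEN val
    ELSE '*** MASKED ***'
  END;

-- Attach the policy to a column.
ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;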