Animating CSS Width and Height Without the Squish Effect

The first rule of animating on the web: don't animate width and height. It forces the browser to recalculate layout and repaint, and that's slow (or "expensive" as they say). If you can get away with it, animating any transform property is faster (and "cheaper").

Butttt, transform can be tricky. Check out how complex this menu open/close animation becomes in order to make it really performant. Rik Schennink blogs about another tricky situation: border-radius. When you animate the scale of an element in one direction, you get a squishy effect where the corners don't maintain their nice radius. The solution? 9-slice scaling:

This method allows you to scale the element and stretch images 2, 4, 6, and 8, while linking 1, 3, 7, and 9 to their respective corners using absolute positioning. This results in corners that aren’t stretched when scaled.

It's like the 2020 version of sliding doors.



Consistent Backends and UX: Why Should You Care?

Article Series

  1. Why should you care?
  2. What can go wrong? (Coming soon)
  3. What are the barriers to adoption? (Coming soon)
  4. How do new algorithms help? (Coming soon)

More than ever, new products aim to make an impact on a global scale, and user experience is rapidly becoming the determining factor for whether they are successful or not. These properties of your application can significantly influence the user experience:

  1. Performance & low latency
  2. The application does what you expect
  3. Security
  4. Features and UI

Let's begin our quest toward the perfect user experience!

1) Performance & Low Latency

Others have said it before: performance is user experience (1, 2). When you have caught the attention of potential visitors, a slight increase in latency can make you lose that attention again.

2) The Application Does What You Expect

What does ‘does what you expect’ even mean? It means that if I change my name in my application to ‘Robert’ and reload the application, my name will be Robert and not Brecht. It seems important that an application delivers these guarantees, right? 

Whether the application can deliver on these guarantees depends on the database. When pursuing low latency and performance, we end up in the realm of distributed databases where only a few of the more recent databases deliver these guarantees. In the realm of distributed databases, there might be dragons, unless we choose a strongly (vs. eventually) consistent database. In this series, we’ll go into detail on what this means, which databases provide this feature called strong consistency, and how it can help you build awesomely fast apps with minimal effort.  

3) Security

Security does not always seem to impact user experience at first. However, as soon as users notice security flaws, relationships can be damaged beyond repair. 

4) Features and UI 

Impressive features and a great UI have a strong impact on the conscious and unconscious mind. Often, people only desire a specific product after they have experienced how it looks and feels.

If a database saves time in setup and configuration, then the rest of our efforts can be focused on delivering impressive features and a great UI. There is good news: nowadays, there are databases that deliver on all of the above, do not require configuration or server provisioning, and provide easy-to-use APIs such as GraphQL out of the box.

What is so different about this new breed of databases? Let’s take a step back and show how the constant search for lower latency and better UX, in combination with advances in database research, eventually led to a new breed of databases that are the ideal building blocks for modern applications. 

The quest for distribution

I. Content delivery networks

As we mentioned before, performance has a significant impact on UX. There are several ways to improve latency, the most obvious being to optimize your application code. Once your application code is reasonably optimized, network latency and the database's read/write performance often remain the bottleneck. To achieve our low-latency requirement, we need to make sure that our data is as close to the client as possible by distributing the data globally. We can deliver the second requirement (read/write performance) by making multiple machines work together, or in other words, by replicating data.

Distribution leads to better performance and consequently to good user experience. We’ve already seen extensive use of a distribution solution that speeds up the delivery of static data; it’s called a Content Delivery Network (CDN). CDNs are highly valued by the Jamstack community to reduce the latency of their applications. They typically use frameworks and tools such as Next.js/Now, Gatsby, and Netlify to preassemble front-end React/Angular/Vue code into static websites so that they can serve them from a CDN.

Unfortunately, CDNs aren't sufficient for every use case, because we can’t rely on statically generated HTML pages for all applications. There are many types of highly dynamic applications where you can’t statically generate everything. For example:

  1. Applications that require real-time updates for instantaneous communication between users (e.g., chat applications, collaborative drawing or writing, games).
  2. Applications that present data in many different forms by filtering, aggregating, sorting, and otherwise manipulating data in so many ways that you can’t generate everything in advance. 

II. Distributed databases

In general, a highly dynamic application will require a distributed database to improve performance. Just like a CDN, a distributed database also aims to become a global network instead of a single node. In essence, we want to go from a scenario with a single database node... 

...to a scenario where the database becomes a network. When a user connects from a specific continent, he will automatically be redirected to the closest database. This results in lower latencies and happier end users. 

If databases were employees waiting by a phone, the database employee would inform you that there is an employee closer by, and forward the call. Luckily, distributed databases automatically route us to the closest database employee, so we never have to bother the database employee on the other continent. 

Distributed databases are multi-region, and you always get redirected to the closest node.

Besides lower latency, distributed databases provide a second and a third advantage. The second is redundancy, which means that if one of the database locations in the network were completely obliterated by a Godzilla attack, your data would not be lost, since other nodes still hold duplicates of your data.

Distributed databases provide redundancy which can save your application when things go wrong. 
Distributed databases divide the load by scaling up automatically when the workload increases. 

Last but not least, the third advantage of using a distributed database is scaling. A database that runs on one server can quickly become the bottleneck of your application. In contrast, distributed databases replicate data over multiple servers and can scale up and down automatically according to the demands of the application. Some advanced distributed databases take care of this aspect completely for you. These databases are known as "serverless", meaning you don’t even have to configure when the database should scale up and down, and you only pay for the usage of your application, nothing more.

Distributing dynamic data brings us to the realm of distributed databases. As mentioned before, there might be dragons. In contrast to CDNs, the data is highly dynamic; the data can change rapidly and can be filtered and sorted, which brings additional complexities. The database world examined different approaches to achieve this. Early approaches had to make sacrifices to achieve the desired performance and scalability. Let’s see how the quest for distribution evolved. 

Traditional databases' approach to distribution

One logical choice was to build upon traditional databases (MySQL, PostgreSQL, SQL Server) since so much effort has already been invested in them. However, traditional databases were not built to be distributed and therefore took a rather simple approach to distribution. The typical approach to scale reads was to use read replicas. A read replica is just a copy of your data from which you can read but not write. Such a copy (or replica) offloads queries from the node that contains the original data. This mechanism is very simple in that the data is incrementally copied over to the replicas as it comes in.

Due to this relatively simple approach, a replica’s data is always older than the original data. If you read the data from a replica node at a specific point in time, you might get an older value than if you read from the primary node. This is called a "stale read". Programmers using traditional databases have to be aware of this possibility and program with this limitation in mind. Remember the example we gave at the beginning where we write a value and reread it? When working with traditional database replicas, you can’t expect to read what you write. 
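To make this concrete, here is a minimal TypeScript sketch that simulates a primary node with a single, asynchronously updated read replica. The store shape and the 100 ms replication lag are invented purely for illustration:

```typescript
// A minimal simulation of a primary with one read replica.
type Store = Map<string, string>;

const primary: Store = new Map();
const replica: Store = new Map();

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Writes go to the primary; the replica is updated asynchronously.
function write(key: string, value: string): void {
  primary.set(key, value);
  sleep(100).then(() => replica.set(key, value)); // replication lag
}

// Reads are offloaded to the replica.
const read = (key: string) => replica.get(key);

async function main() {
  write("name", "Brecht");
  await sleep(200); // let the first write replicate

  write("name", "Robert");
  console.log(read("name")); // logs "Brecht" (a stale read)

  await sleep(200); // after the lag, the replica has caught up
  console.log(read("name")); // logs "Robert"
}

main();
```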

You could improve the user experience slightly by optimistically applying the results of writes on the front end before all replicas are aware of the writes. However, a reload of the webpage might return the UI to a previous state if the update did not reach the replica yet. The user would then think that his changes were never saved. 
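A sketch of what such an optimistic update could look like on the front end follows; the `api` and `store` objects are hypothetical stand-ins, not a real library:

```typescript
// Hypothetical sketch of an optimistic front-end update against an
// eventually consistent back end.
interface Api {
  updateName(name: string): Promise<void>;
}

const store = { name: "Brecht" };

async function renameOptimistically(api: Api, newName: string) {
  const previous = store.name;
  store.name = newName; // show the change immediately, before the write lands

  try {
    await api.updateName(newName);
    // Even after this resolves, a page reload served by a lagging
    // replica may still show `previous` until replication catches up.
  } catch {
    store.name = previous; // roll back if the write failed outright
  }
}
```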

The first generation of distributed databases

In the replication approach of traditional databases, the obvious bottleneck is that all writes go to the same node. The machine can be scaled up, but will inevitably bump into a ceiling. As your app gains popularity and writes increase, the database will no longer be fast enough to accept new data. To scale horizontally for both reads and writes, distributed databases were invented. A distributed database also holds multiple copies of the data, but you can write to any of these copies. Since data can be updated via any node, all nodes have to communicate and inform one another about new data. In other words, replication is no longer a one-way street, as it is in the traditional system.

However, these kinds of databases can still suffer from the aforementioned stale reads and introduce many other potential issues related to writes. Whether they suffer from these issues depends on the decisions they made in terms of availability and consistency.
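One classic write issue is the lost update: when two nodes accept concurrent writes for the same key and reconcile them with a last-write-wins rule (a common default in eventually consistent stores), one of the writes silently disappears. A contrived sketch:

```typescript
// Contrived sketch of a lost update under last-write-wins (LWW).
interface VersionedWrite {
  value: string;
  timestamp: number;
}

// Two nodes each accepted a write for the same key; neither client saw an error.
const writeOnNodeA: VersionedWrite = { value: "Alice joins team 1", timestamp: 1000 };
const writeOnNodeB: VersionedWrite = { value: "Bob joins team 1", timestamp: 1001 };

// When the nodes later sync, LWW keeps only the newest write.
function lastWriteWins(a: VersionedWrite, b: VersionedWrite): VersionedWrite {
  return a.timestamp >= b.timestamp ? a : b;
}

const merged = lastWriteWins(writeOnNodeA, writeOnNodeB);
console.log(merged.value); // "Bob joins team 1": Alice's update silently vanished
```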

This first generation of distributed databases was often called the "NoSQL movement", a name influenced by databases such as MongoDB and Neo4j, which also provided alternative languages to SQL and different modeling strategies (documents or graphs instead of tables). NoSQL databases often did not have typical traditional database features such as constraints and joins. As time passed, the name proved to be a poor one, since many databases that were considered NoSQL did provide a form of SQL. Multiple interpretations arose, claiming that NoSQL databases:

  • do not provide SQL as a query language;
  • do not only provide SQL (NoSQL = Not Only SQL);
  • do not provide typical traditional features such as joins, constraints, and ACID guarantees;
  • model their data differently (graph, document, or temporal model).

Some of the newer databases that were non-relational yet offered SQL were then called "NewSQL" to avoid confusion. 

Wrong interpretations of the CAP theorem

The first generation of databases was strongly inspired by the CAP theorem, which dictates that you can't have both Consistency and Availability during a network Partition. A network partition, essentially a situation where two nodes can no longer talk to each other about new data, can arise for many reasons (e.g., apparently sharks sometimes munch on Google's cables). Consistency means that the data in your database is always correct, but not necessarily available to your application. Availability means that your database is always online and that your application is always able to access that data, but does not guarantee that the data is correct or the same across nodes. We generally speak of high availability since there is no such thing as 100% availability. Availability is expressed in nines (e.g., 99.9999% availability), since there is always a possibility that a series of events causes downtime.

Visualization of the CAP theorem: a balance between consistency and availability in the event of a network partition.
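To get a feel for what those nines mean in practice, here is a small sketch that converts an availability percentage into the downtime it allows per year:

```typescript
// Convert an availability percentage into the downtime it allows per year.
const SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60; // ~31.6 million seconds

function downtimePerYear(availabilityPercent: number): number {
  return ((100 - availabilityPercent) / 100) * SECONDS_PER_YEAR;
}

console.log(downtimePerYear(99.9));    // ~31,558 s: almost 9 hours per year
console.log(downtimePerYear(99.9999)); // ~31.6 s per year
```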

But what happens if there is no network partition? Database vendors took the CAP theorem a bit too generally and either chose to accept potential data loss or to be available, whether there is a network partition or not. While the CAP theorem was a good start, it did not emphasize that it is possible to be highly available and consistent when there is no network partition. Most of the time, there are no network partitions, so it made sense to describe this case by expanding the CAP theorem into the PACELC theorem. The key difference is the last three letters (ELC), which stand for "Else: Latency-Consistency". This theorem dictates that if there is no network partition, the database has to balance Latency and Consistency.

According to the PACELC theorem, increased consistency results in higher latencies (during normal operation).

In simple terms: when there is no network partition, latency goes up when the consistency guarantees go up. However, we’ll see that reality is still even more subtle than this. 

How is this related to User Experience?

Let’s look at an example of how giving up consistency can impact user experience. Consider an application that provides you with a friendly interface to compose teams of people; you drag and drop people into different teams. 

Once you drag a person into a team, an update is triggered to save the new team composition. If the database does not guarantee that your application can read the result of this update immediately, then the UI has to apply those changes optimistically. In that case, bad things can happen:

  • The user refreshes the page and does not see his update anymore and thinks that his update is gone. When he refreshes again, it is suddenly back. 
  • The database did not store the update successfully due to a conflict with another update. In this case, the update might be canceled, and the user will never know. He might only notice that his changes are gone the next time he reloads. 

This trade-off between consistency and latency has sparked many heated discussions between front-end and back-end developers. The first group wanted a great UX where users receive feedback when they perform actions and can be 100% sure that once they receive this feedback and respond to it, the results of their actions are consistently saved. The second group wanted to build a scalable and performant back end and saw no other way than to sacrifice the aforementioned UX requirements to deliver that. 

Both groups had valid points, but there was no silver bullet to satisfy both. When transactions increased and the database became the bottleneck, their only option was to go for either traditional database replication or a distributed database that sacrificed strong consistency for something called "eventual consistency". In eventual consistency, an update to the database will eventually be applied on all machines, but there is no guarantee that the next transaction will be able to read the updated value. In other words, if I update my name to "Robert", there is no guarantee that I will actually receive "Robert" if I query my name immediately after the update.

Consistency Tax

To deal with eventual consistency, developers need to be aware of the limitations of such a database and do a lot of extra work. Programmers often resort to user experience hacks to hide the database limitations, and back ends have to write lots of additional layers of code to accommodate various failure scenarios. Finding and building creative solutions around these limitations has profoundly impacted the way both front- and back-end developers do their jobs, significantly increasing technical complexity while still not delivering an ideal user experience.

We can think of this extra work required to ensure data correctness as a “tax” an application developer must pay to deliver good user experiences. That is the tax of using a software system that doesn’t offer consistency guarantees that hold up in today’s web-scale concurrent environments. We call this the Consistency Tax.

Thankfully, a new generation of databases has evolved that does not require you to pay the Consistency Tax and can scale without sacrificing consistency!

The second generation of distributed databases

A second generation of distributed databases has emerged to provide strong (rather than eventual) consistency. These databases scale well, won't lose data, and won't return stale data. In other words, they do what you expect, and you no longer need to learn about the limitations or pay the Consistency Tax. If you update a value, the next time you read that value, it always reflects the updated value, and different updates are applied in the same temporal order as they were written. FaunaDB, Spanner, and FoundationDB are the only databases at the time of writing that offer strong consistency without limitations (also called strict serializability).

The PACELC theorem revisited

The second generation of distributed databases has achieved something that was previously considered impossible; they favor consistency and still deliver low latencies. This became possible due to intelligent synchronization mechanisms such as Calvin, Spanner, and Percolator, which we will discuss in detail in article 4 of this series. While older databases still struggle to deliver high consistency guarantees at lower latencies, databases built on these new intelligent algorithms suffer no such limitations.

Database designs greatly influence the attainable latency at high consistency.

Since these new algorithms allow databases to provide both strong consistency and low latencies, there is usually no good reason to give up consistency (at least in the absence of a network partition). The only time you would do this is if extremely low write latency is the only thing that truly matters, and you are willing to lose data to achieve it. 

Intelligent algorithms result in strong consistency and relatively low latencies.

Are these databases still NoSQL?

It's no longer trivial to categorize this new generation of distributed databases. Many efforts are still made (1, 2) to explain what NoSQL means, but none of them quite makes sense anymore, since NoSQL and SQL databases are growing towards each other. New distributed databases borrow from different data models (document, graph, relational, temporal), and some of them provide ACID guarantees or even support SQL. They still have one thing in common with NoSQL: they are built to solve the limitations of traditional databases. One word will never be able to describe how a database behaves. In the future, it would make more sense to describe distributed databases by answering these questions:

  • Is it strongly consistent? 
  • Does the distribution rely on read-replicas, or is it truly distributed?
  • What data models does it borrow from?
  • How expressive is the query language, and what are its limitations? 

Conclusion

We explained how applications can now benefit from a new generation of globally distributed databases that can serve dynamic data from the closest location in a CDN-like fashion. We briefly went over the history of distributed databases and saw that it was not a smooth ride. Many first-generation databases were developed, and their consistency choices, which were mainly driven by the CAP theorem, required us to write more code while still diminishing the user experience. Only recently has the database community developed algorithms that allow distributed databases to combine low latency with strong consistency. A new era is upon us, a time when we no longer have to make trade-offs between low-latency data access and consistency!

At this point, you probably want to see concrete examples of the potential pitfalls of eventually consistent databases. In the next article of this series, we will cover exactly that. Stay tuned for these upcoming articles:

Article Series

  1. Why should you care?
  2. What can go wrong? (Coming soon)
  3. What are the barriers to adoption? (Coming soon)
  4. How do new algorithms help? (Coming soon)


NetApp and DevOps: How We Did It

For the last several years, NetApp has been on our own DevOps journey. We made the decision to adopt DevOps methodologies and technologies into our own systems. Led by our internal IT team, we've taken steps toward a software-defined, cloud-based data center with end-to-end DevOps workflow automation. So, how is it going, and what can you learn from our experience?

Want your team to stay focused through your own DevOps journey? Check out Best Practices for Adopting a DevOps Culture.

Gutenberg Hub Launches Collection of 100 Block Templates

Screenshot of the Gutenberg Hub block templates library.

Gutenberg Hub launched its collection of block templates yesterday. The project, which kicked off with 100 free templates, aims to help users create complex layouts by simply copying and pasting block code into the editor.

Munir Kamal, the founder of CakeWP, created Gutenberg Hub in 2017 after he heard about the initial Gutenberg project announcement. “It excited me from the early days, and that pushed me to set up a blog where I can share Gutenberg related stuff,” he said. “It all started with curating the latest updates going on to the Gutenberg project and what others are working on related to Gutenberg. Later on, I started writing some tutorials on the blog to help beginner users get started with Gutenberg.”

He then built out a block directory before following it up with the templates directory. “The goal is to make Gutenberg Hub an excellent resource for Gutenberg users,” he said. He also has other big goals with the site, including a potential theme directory alongside the existing tutorials, templates, and blocks.

Currently, Kamal has a team of five people working on his various CakeWP projects. Some of the team contributed to the template library. One member built the template library system on top of Gatsby, a framework based on React that can pull data from a CMS such as WordPress.

The idea for the template library came to Kamal while he was trying to replicate homepages from WordPress page builders in Gutenberg. “I was able to recreate popular page builders in Gutenberg without any extra plugin,” he said. “But that took me a lot of effort, and I realized a lot could be achieved with core Gutenberg.” He began thinking in terms of how section templates could help him build out pages more quickly. “So, I started creating sections and eventually that grew up into a template library idea.”

With the help of a team member, the two knocked out 100 custom templates in a month. “Honestly, it took a lot of time,” said Kamal. “I will be adding more templates myself for sure. But, what I wish to happen is that other designers, developers, and users contribute and add templates to this library.”

Kamal is currently building a system so that others can add their custom templates. Ultimately, he wants the project to be primarily run by the community. The idea is similar in scope to the ShareABlock community site but with a focus on templates.

With the Gutenberg project’s focus on block patterns this year, it will be interesting to see how projects such as this one will fit into that paradigm. At the very least, the template library will provide useful ideas for the Gutenberg team, providing a sort of roadmap of patterns that may be worth adopting in core WordPress.

“Technically, what I am doing with this library is kind of similar to block patterns,” said Kamal. “I am looking forward to how the block pattern system works and will try my best to make this library work with that.”

A Collection of Templates

Testing the Hero 3 template from the Gutenberg Hub library.

The templates in the library are essentially sections of a page. Users can import multiple sections to create a range of complex layouts.

Unlike other libraries where users may need to import a JSON file, Gutenberg Hub’s templates are completely copy and paste. The site provides a simple “Copy Code” button. Once clicked, the block code is copied to your clipboard, which can be pasted directly into the block editor.

Some of the blocks have custom CSS to handle certain design aspects, which is also copy and paste. Kamal recommends the Blocks CSS plugin by ThemeIsle, which allows users to add CSS directly in the block editor. The other option is for users to add the CSS code via the WordPress customizer. In my experience with some of the templates, the extra CSS wasn't even necessary to achieve a nice layout.

With 100 templates and counting, Kamal broke the collection down into 12 categories:

  • Hero
  • Testimonial
  • Team
  • Stats
  • Pricing
  • Logos
  • Gallery
  • Features
  • FAQ
  • Content
  • Contact
  • Card

There is a little something for everyone. The library covers many of the most popular patterns currently found around the web. I am having fun testing out the various templates. Some work better within my theme than others. On the whole, Gutenberg Hub has crafted a solid project.

The contact form templates require the use of the Gutenberg Forms plugin, which is developed and maintained by Kamal. This requirement is because WordPress does not have a built-in form system, so an external plugin was necessary. None of the other templates require a plugin at the moment.

Kamal does not have a favorite template from the collection. He stressed that he was not a designer. “I have tried my best to put together templates that are good and useful in multiple use cases,” he said. “I hope others like the templates as well, and it can be a good starting point for creating a beautiful layout in Gutenberg.”

Integrating Dotenv With NestJS and TypeORM

When you use third-party services during app development, SSH keys or API credentials inevitably get involved. That becomes a problem when a team of developers handles the project: the source code has to be pushed to Git repositories periodically, and once code containing third-party keys is pushed to a repository, anyone with access can see them.

A very prominent and widely used solution for this problem is environment variables. These are local variables that contain useful information, such as API keys, and are made available to the application or project without being committed to source control.
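As a minimal sketch of how this typically fits together in NestJS, the official @nestjs/config package (which uses dotenv under the hood) can load a .env file and hand the values to TypeORM; the connection details and variable names below are examples, assuming a Postgres database:

```typescript
// app.module.ts: a minimal sketch wiring @nestjs/config (which uses
// dotenv under the hood) into TypeORM. Variable names are examples only.
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    // Reads a local .env file and exposes its values via ConfigService.
    ConfigModule.forRoot({ isGlobal: true }),
    TypeOrmModule.forRootAsync({
      inject: [ConfigService],
      useFactory: (config: ConfigService) => ({
        type: 'postgres', // assuming a Postgres database
        host: config.get<string>('DB_HOST', 'localhost'),
        port: parseInt(config.get<string>('DB_PORT', '5432'), 10),
        username: config.get<string>('DB_USER'),
        password: config.get<string>('DB_PASSWORD'),
        database: config.get<string>('DB_NAME'),
      }),
    }),
  ],
})
export class AppModule {}
```

The .env file holding the real values is then kept out of version control (e.g., via .gitignore), so credentials never reach the repository.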

The Complete Guide on How to Conduct a Sprint Planning Meeting Like a Pro

Ever been part of a Sprint planning meeting that seemed to last an eternity without reaching a concrete conclusion? Everyone has. And we are here to change that.

This article is all about teaching you how to conduct a Sprint planning meeting that will make your upcoming Sprints more effective, efficient, and, hopefully, less miserable. Let’s start at the very beginning.

Where Is My API Gateway?

Applications are changing, and so is the infrastructure required to support them. Earlier, when applications were developed and deployed as monoliths, the network infrastructure around them was monolithic too. A monolith needed services from the perimeter proxy, like security and monitoring. But as the monolithic application is broken up into several smaller parts, the infrastructure that supports it has to change.

At the center of this change is how the proxy has adapted to providing infrastructure for services, the smaller pieces of the former monolith. The proliferation of services has also given rise to the service mesh pattern, where typical application functions that used to be baked into the application (like a tracing library integrated with the application) have moved to the proxy.

Unit Testing Log Messages Made Easy

As Java developers, we need to cover a lot of scenarios to ensure the quality of our software and catch bugs as soon as possible when introducing new code. For 99% of all my use cases, AssertJ, JUnit, Mockito, and WireMock are sufficient to cover the test cases. But for the remaining use cases, like unit testing info, debug, or warning log messages, these frameworks don't help you out. There was also no other framework providing an easy-to-use method to capture log messages.

The answer the community provided works well, but it needs a lot of boilerplate code just to assert your log events. So, I wanted to make it easier for myself and share it with you! This is how the LogCaptor library came to life.

Modern Agile and the State of Digital Transformation

Agile is a software development paradigm created in 2001. It was different from how developers were used to building software, and it enabled them to be more effective at delivering customer value. The Agile Manifesto contains the principles of agile project management.

Even with all the benefits of agile, developers have identified some limitations over the past few years. Some agile principles appear to be out of date. Also, all too often, agile principles remain confined to the realm of software development.

Tell Your Junior Dev To Do This Before Your Next Stand-Up

“Yesterday I worked on feature X. Today I’m working on feature Y. No blockers.”

Sound familiar? It’s the update your new junior developer gave in the stand-up meeting this morning. You told them not to ramble, but this isn’t really what you had in mind. When you were explaining stand-up 101, you told them to:

Can Software Leaders Use Metrics Without Damaging Culture?

“I’d love to adopt and champion metrics, but I’m afraid that if I do, I’ll lose my team. They’ll think Big Brother is always looking over their shoulder and start looking for jobs elsewhere.” This is a sentiment I’ve heard from several software development leaders when talking about trying to become more data-driven to improve their team’s productivity.

At this fall’s GitHub Universe, one of the best-attended sessions was on how metrics can help dev leaders. The presenter focused mainly on all the pitfalls of measuring the wrong things.

Spring Security — Chapter 1

Spring Security is a framework that provides authentication and authorization to Java applications.

Authentication is used to confirm the identity of a user, and authorization checks the privileges and rights a user has that determine what resources and actions they have access to within an application.

Inspirational Websites Roundup #13

Today we have a special collection of really interesting and creative website designs for you! Some outstanding works have been released over the past few weeks, and we’re very excited to share these masterpieces with you. So many cool new trends are mixed and combined, including beautiful 3D graphics, outlined text, unusual layouts, and neat hover trails.

We hope you enjoy this selection and get a good dose of fresh inspiration!

Bruno Ortolland

Socialclub

Voir plus loin

Kevin van der Wijst

Pest Stop Boys

Luigi De Rosa

Davide Baratta

DDNA

Illuminating Radioactivity

Vallourec

Tula microphones

Alessandra Zanghi Studio

Connect Homes

Six N. Five

LEQB

Bite Toothpaste Bits

Yelloworld

Loer Architecten

Violet Office

Krause Studio

Sandy Dauneau

80s Fever

Elias Akentour

Josh W Comeau

PenzGidroMash

WØRKS

56k

Viens-là

The Markup

PANAMÆRA

Érika Moreira

kern inc.

DEADWATER

unspun

Playground Paris

Zenly

This is Spotify

Inspirational Websites Roundup #13 was written by Mary Lou and published on Codrops.