Micro.blog Adds Tumblr Cross-Posting

Over the weekend, Micro.blog added Tumblr cross-posting to its service in response to Automattic’s acquisition of Tumblr. Micro.blog users can now elect to have their blog posts automatically syndicated to Tumblr.

Although Tumblr is somewhat of a competitor to Micro.blog’s microblogging and blog-hosting service, founder Manton Reece said he sees Tumblr as more of a social network than a blog host:

I usually avoid adding blog hosting services to Micro.blog’s available cross-posting destinations. After all, if it’s a good blog host that I could recommend as your primary blog, why not just post everything there instead of using Micro.blog’s own blog hosting? But the more I’ve used Tumblr in the last couple of weeks, the more I think about Tumblr as a community first and a blog host second.

Micro.blog may bear some similarities to Tumblr, but the service has an entirely different flavor. It has become an alternative watering hole for indie web enthusiasts, with support for the Webmention and Micropub protocols. Many who use the service already seem convinced of the value of hosting a blog that is independent of the major social media silos.

Micro.blog already had several Tumblr-related features built into the platform. Users can follow Tumblr blogs on Micro.blog by visiting the Discover feed and plugging in any Tumblr domain name. Micro.blog users can also add their own Tumblr feeds to their accounts so followers can see posts from both their main microblog and their Tumblr blog.

On a recent episode of the Core Intuition podcast titled “A Much Bigger Megaphone,” Reece and co-host Daniel Jalkut, the developer of MarsEdit, speculate on the future of Tumblr and discuss some key differences between it and Micro.blog’s service and social network. Micro.blog acts more as an aggregator of blogs from around the web, whereas Tumblr’s blogging aspect is limited to Tumblr accounts only.

Both networks aim to make blogging easier and seem to focus on short-form posts. However, Micro.blog is more of a social network for independent microbloggers who want to connect their content to a stream of blogs, while Tumblr is a blogging service with a symbiotic relationship to the communities its publishing capabilities enable. Tumblr has the potential to become the most important social network on the open web, given its active user base and Automattic’s commitment to independent publishing.

Both Reece and Jalkut said they were optimistic that Automattic’s acquisition of Tumblr will introduce more opportunities for both Micro.blog and MarsEdit, as the company’s influence makes it easier to market the value of owning your own blog. Tumblr’s API has long lacked support for some of MarsEdit’s key features, and Jalkut said he is hopeful that, with Automattic at the helm, the API may change to support the kinds of things his customers need.

In a post published shortly after the acquisition, Reece said he believes Tumblr has a lot of overlap with Micro.blog and views Automattic as having a “shared vision of the future that embraces content ownership, supports healthy communities, and deemphasizes massive social networks.” Those who value blogs and blogging are hopeful that Tumblr’s new ownership will rekindle some of the social magic that was present in the early days of the web but has since become more scarce.

On Par With Window Functions

Use a golf analogy when explaining to executives.
Use a car analogy for all others. — Confucius.

The purpose of window functions is to translate business reporting requirements into SQL declaratively and effectively, so that query performance and developer and business-analyst efficiency improve dramatically.
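As a rough illustration (not from the original article), here is a minimal sketch of that idea in Python using the built-in sqlite3 module, which supports window functions when the bundled SQLite is version 3.25 or newer; the table and figures are made up.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('East', '2019-07', 100), ('East', '2019-08', 150),
        ('West', '2019-07', 200), ('West', '2019-08', 120);
""")

# A running total per region, expressed declaratively in a single query
# instead of procedural reporting code.
query = """
    SELECT region, month, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY month) AS running_total
    FROM sales
    ORDER BY region, month;
"""
for row in con.execute(query):
    print(row)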

php Central Europe Conference Canceled Due to Lack of Speaker Diversity

phpCE, a central European PHP conference that was previously scheduled for October 4-6, has been canceled following a public fiasco over the lack of gender diversity in its speaker lineup. The event, previously known as PHPCon Poland, was set to be held in Dresden, Germany, after the concept changed last year to rotate host cities and serve a larger region of the PHP community.

After phpCE had boasted a “rich and diverse lineup,” the published schedule was criticized for including zero women, while several speakers were given two sessions apiece. The 2018 event had a similar lack of diversity among speakers. CFP Land founder Karl Hughes’ tweet precipitated a flood of critical feedback.

Organizers received the public criticisms as an attack, a response that disappointed many who were previously considering attending the event. Speakers started to withdraw from the conference and ticket sales dried up, as organizers demonstrated an unwillingness to do further diversity outreach beyond their initial call for proposals.

Mark Baker, one of the speakers who decided to cancel his engagement, said organizers attempted to persuade him not to withdraw by offering to put the sole female speaker applicant on the schedule. Baker said he was uncomfortable, as that would put “a lot of pressure on the woman, knowingly being invited to speak after an all-male speaker list has already been announced, making her a ‘token’ to diversity.”

“It wasn’t an easy decision to make, because I do enjoy sharing my coding passion; but having advocated for diversity at PHP developer conferences for the last several years, I have to follow my beliefs that diversity should be a cornerstone of the PHP developer community,” Baker said. “Diversity matters more to me than speaking.”

Larry Garfield (@crell), who is active in the Drupal community, reported that he also tried to work with phpCE’s organizers to diversify the lineup before being forced by his personal convictions to withdraw.

“I messaged the organizers, asking them to drop some of our double-sessions in favor of more female participation,” Garfield said. “We also offered to work with them to figure out ways to reduce the cost of bringing us in (a number of us were transatlantic, and Dresden is not the cheapest city to get to) so they could afford to cover more speakers.

“Unfortunately, the organizers indicated they were not open to such an arrangement. According to them, they had only a single woman submit a session proposal this year despite having women present in previous years, and hers was a repeat from a local conference last year. They were also firm that the Call For Papers was done and over and they’re not open to reaching out to new people now. Sadly, from what the organizers told me, they actively don’t want to do outreach.”

A situation that has escalated into an international debacle is often beyond repair. If a diverse speaker selection hasn’t been established before the schedule announcement goes out, backpedaling toward inclusion inevitably signals to potential attendees that the event might not be a welcoming one.

Due to the way it was handled, phpCE’s cancellation became a spectacular failure of inclusion that played out publicly over the past several weeks. phpCE’s organizers remained defensive in their replies to critics on social media, clinging to an approach to organizing events that much of the community considers outmoded and ineffective.

phpCE did not publish a detailed explanation of why the event was canceled but cited several blog posts and exchanges on social media as factors in the decision.

How WordPress Is Equipping Event Organizers to Create More Diverse Speaker Lineups

Many organizers of large tech events are making proactive attempts to recruit more diverse speakers, and the web is full of resources from those who have shared their processes and tips on the topic. In the WordPress world specifically, the Community team has created a Diversity Speaker Training Workshop to help meetup and WordCamp organizers cultivate better representation from different groups in their communities.

This particular workshop, which was created by Jill Binder and sponsored by Automattic, has produced positive results in 55 WordPress communities in 26 different countries.

“All of the communities that held this workshop experienced a real change in the speaker roster for their annual conferences; many of their WordCamps went from having 10% women speakers to having 50% or more women speakers in less than a year,” community organizer Andrea Middleton said. “In 2017, Seattle had 60% women speakers and in 2018, Vancouver had 63%.” Organizers of large events like WordCamp US and WordCamp Miami have also created more diverse lineups in recent years with their own proactive strategies.

The Diversity Speaker Training Workshop seems to be particularly effective because it focuses on actively creating and equipping future speakers in a more organic way at the local level. Any WordPress event organizers who feel they have no options for increasing the diversity of their events can get help from the Community team. Relying on the call for speakers to deliver a diverse lineup is not always the most effective strategy. In many cases, it takes a great deal of work to bring in diverse speakers, but the Community Team has worked for years to pioneer new resources that help organizers succeed in these efforts.

The Vegebot That Can Harvest Lettuce

On the surface, agriculture doesn’t appear to be the most tech-savvy of industries, yet it is one in which considerable innovation is unfolding. Nowhere is this more evident than in the advance of robots able to perform a wide range of tasks involved in managing, harvesting, and growing crops.

The latest example comes from a recent study from the University of Cambridge, which highlights a Vegebot device that can successfully recognize and harvest iceberg lettuce using machine learning.

How to Implement Kubernetes

To understand the current and future state of Kubernetes (K8s) in the enterprise, we gathered insights from IT executives at 22 companies. We asked, "What are the most important elements of implementing K8s for orchestrating containers?" Here’s what we learned:

Security

  • Four things: 1) security; 2) you don’t have to go “all-in” on K8s (for example, don’t use it for databases); 3) capacity planning for CPU; 4) your K8s structure will mimic your team structure.
  • Networking, storage, security, monitoring, and management capabilities are all essential elements for implementing Kubernetes container orchestration. Businesses stand to realize tremendous benefits from the fast pace at which both Kubernetes and the container ecosystem are advancing. However, this pace also increases the challenge of keeping up with new advancements and functionality that are critical to success, especially in the area of security.

Planning

  • A lot of it comes down to planning. K8s brings a lot of power and complexity, so plan ahead to prevent surprises and learn how to set up the right environments.
  • Future-proof your architecture and strategy for multi-cloud, platform-agnostic deployment, and whatever else comes our way. It’s easier to build decoupled, distributed applications with K8s that can run anywhere. Adopting different programming languages and frameworks also becomes easier, which lets you use the best tool for the job when building applications. New challenges appear as you enable more and more API communication over the network across your applications: performance and security issues become more critical as the flow of information across services accelerates, as does being able to observe the traffic and collect metrics at a much bigger scale in order to debug errors down the road. While Kubernetes allows you to deploy workloads across multiple clouds and platforms, it’s vital to adopt platform-agnostic technology that can be deployed within your Kubernetes cluster rather than relying too heavily on cloud vendor solutions to secure and monitor traffic. Increasing team productivity and business value is very dependent on speed: consolidating how you manage and operate services across every environment and every cloud makes it faster both to create new services and to consume them from client applications.

Experience

  • K8s is still new even though it’s been around for five years. The lack of expertise and talent is the number one challenge. What you want from an enterprise standpoint is a standardized, shared platform to use on multiple clouds and on-prem. Containers are portable, and because K8s has a standard open-source API, you can build a shared platform that runs anywhere. The challenge is having the right people with the skills, and then day-two operations. Once in production, you have to deal with day-two operations like upgrades, with a new version out every three months, plus keeping the cluster patched, backed up, recoverable from disaster, and able to scale. K8s abstracts the infrastructure from the developers through a declarative approach: you define the end state of what you want K8s to do, you just declare it, and K8s makes it happen (a minimal sketch of this declarative model follows this list). If something fails, K8s automatically recreates it. The downside is that if something goes wrong with the system, you now have to search through multiple levels of abstraction to figure out where the problem is. To have a successful implementation, you need a team that knows what’s happening across the landscape; if something fails, you need to know the nitty-gritty details of all of the services that are running. Troubleshooting, debugging, upgrading the cluster, SLA management, and day-two operations are challenging today.
  • There’s a learning curve to the technology. Most application developers probably know it, but on the data engineering side it is quite new. The first step is making it easy for developers to understand what the pieces are and how they should be used. Then come important aspects like data locality within the K8s cluster and making workloads stateful or stateless as needed. These are important concepts to explain to end users, along with how they fit with K8s.
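To make the declarative approach described above concrete, here is a minimal sketch (not from the survey responses) using the official Kubernetes Python client; the deployment name, labels, and image are placeholders, and a working kubeconfig with cluster access is assumed.

from kubernetes import client, config

config.load_kube_config()  # read credentials from ~/.kube/config

# Declare the desired end state: three replicas of a placeholder web pod.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.17")]
            ),
        ),
    ),
)

# Submit the declaration; the control plane keeps working to make reality
# match it, recreating pods automatically if they fail.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)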

Data Locality

  • It depends on what they are trying to achieve with containers. A lot of customers want portability between on-prem and public cloud, or to deploy a scalable container platform. One key aspect is the differentiation between stateless and stateful applications. Think about how to claim and reclaim storage resources and how to deal with security, performance, reliability, and availability: all of the traditional data center operations topics. Containers support that through persistent volume claims and persistent volume storage. There is a shift in how developers need to think in order to take advantage of persistent storage as they move from stateless to stateful.
  • How you divide your application into smaller services is a critical decision to get right. But for K8s specifically, it’s really important to think about how you are going to handle state: whether it’s using StatefulSets, leveraging your provider’s block storage devices, or moving to a completely managed storage solution, implementing stateful services correctly the first time around is going to save you huge headaches. The other important question is what your application looked like before you began implementing K8s. If you already had hundreds of services, your biggest concern should be how to migrate those services in an incremental but seamless way. Just breaking the first few services off of your monolith? Making sure you have the infrastructure to support lots of services is going to be critical to a successful implementation.

Other

  • 1) Labels are your friends; label everything. They are the road map that lets you figure out where things are going (see the short label-selector sketch after this list). 2) Keep in mind that you don’t know where anything is, so build your environment to be specific to a purpose, not to a location. In K8s it’s not “as small as possible,” it’s “as small as necessary.” Don’t over-engineer your environment into a thousand tiny little things; deliver the information needed from each component.
  • We are seeing more people adopt K8s — different types of deployments, different flavors, different approaches to use. Some customers use a “build your own” approach. We are seeing people using on-prem vendors offering pre-packaged K8s distributions (e.g., Mesosphere, Docker, VMware). A lot is available on public cloud vendors. We see people adopting a consulting-based approach. The exact mix of what you pick depends on what kind of apps you are running on K8s and what kind of users you are servicing, and how advanced you are with your K8s deployments. We see a lot of reliance on cloud and on-prem (Red Hat and IBM are the most prominent). We recommend making sure you understand where you are in your journey, and who your users are to figure out the right mix. Make sure when deploying these technologies you start with people. People need to work well together when services are split between teams in terms of technology, culture, and people in engineering and ops.
  • Declarative APIs: the customer says, “here’s what I want,” and knows it will happen. Applications will be better if they are stateless and able to get their state from somewhere else, like a database. Observability is a huge issue across a broad number of microservices.
  • The overall strategy of automating testing is critical. We see clients trying to find the right way to test, and there is a huge variety of techniques and approaches. What needs to be tested, how are you set up, what is your maturity, and what is the right level of automation? Test the right things in the right way: which tests can be run in parallel, how to deal with data management, and how to leverage orchestration capability. Which devices do you want to include? It depends on the maturity of the team and the software. Integrations: what else does your testing touch, and what are the dependencies? Problems arise when environments cannot take the scale and you fall short of your expectations of what’s possible.
  • K8s alone won’t solve your problem. It’s not an enterprise-grade orchestration stack. You should have the same concerns for K8s as when you put software into production – security, monitoring and debugging. There are 500+ open source products for cloud-native networking. It's impossible to keep up with and maintain. K8s comes out with new releases all of the time. 
  • Think about how to manage configuration, how to use managed services, resource management, credential management, and how to apply AI to K8s infrastructure.
  • We have a consulting package where we do a lot of training around developing and managing K8s clusters. Look for micro-improvements across the massive ecosystem of roughly 500 open source tools; each is a new area of discovery for people getting into cloud-native computing. We help customers consume open source with little to no friction and with security updates.
  • The most important element of implementing K8s to orchestrate containers is its ability to declaratively define application policies that are enforced at runtime to maintain the desired state (e.g., the number of application pods, their types, and their attributes) so that critical applications always remain available. More recently, auto-scaling pods have also become a very important element in ensuring predefined application SLAs are always met. The ease of deploying containers matters as well: companies need the ability to develop, test, and deploy container-based applications quickly and seamlessly using their CI/CD pipelines.
  • I think the main thing to keep in mind is how important core infrastructure is to being successful with Kubernetes. What I mean by this is that you need to have your ducks in a row, especially with storage and networking.
  • 1) Have a plan first, driven by your goals for moving to K8s. Moving from monolithic apps to microservices running on Kubernetes has many benefits, but trying to solve every problem at the same time is a recipe for delayed migration, and frustration. Know what you’re trying to achieve (or better yet, the sequence of goals you’re trying to achieve) and design a plan to accomplish those. The roadmap is key. Think about how you stage the adoption of K8s and the migration from monolith to microservice and how that will get rolled out across the organization. There’s a tremendous amount of new technology in the cloud-native ecosystem; fold that technology into the roadmap, too. Realize that the roadmap can and will change as you gain experience with each piece of that new technology stack. 2) Don’t forget that a new implementation doesn’t eliminate the need to address all the old requirements around Operations, Security, and Compliance. Factors to consider: What kind of app are you creating? Internal, or external? Will it have customer data? How often will it be updated?  Questions to answer: who has access, and how will you enforce that access? Kubernetes to the rescue: Kubernetes provides a revolutionary way of implementing custom guardrails so that you can prevent problems before they happen. Kubernetes lets you inject custom rules and regulations right into the API server (via Admission Control) that enforce an unprecedented level of control. And because Kubernetes provides a uniform way of representing resources that used to be contained in silos (e.g. compute, storage, network), you can impose cross-silo controls. 3) Take your policy out of PDFs and put it into the code. When your infrastructure is code, and your apps are code, so too should your policy be code. The business needs developers to push code rapidly — to improve the business’s software faster, ideally, than competitors — but the business also needs that software to follow the same age-old operations, security, and compliance rules and regulations. The only way to succeed at both is to automate the enforcement of those rules and regulations by pulling them out of PDFs and wikis and moving them into the software. That’s what policy-as-code is all about.
  • Ensure the application is built as a set of independent microservices that are loosely coupled to serve the business. This helps get the most out of Kubernetes. Ensure microservices have built-in resilience (to handle failures), observability (to monitor application), and administrative features (to allow for elastic scaling, data backup, access control, and security, etc.). Essentially, having the application architected the right way is critical to reaping the benefits of Kubernetes.
  • One of the most important elements is ensuring K8s remains simple enough for developers to use. Developers are growing more committed to Kubernetes: in 2016, just under half said they were committed to the technology, but by 2017, 77 percent said the same. Despite Kubernetes’ growing popularity, it is still often challenging for developers to manage manually. Our approach focuses on ensuring that clusters are configured for high availability, stability, and best practices. Kubernetes has many knobs that can be turned to limit resources, segregate components, and configure the way the system performs. It can be challenging to do this on your own, so we have worked hard to provide users with a platform that has best practices baked in from the start.
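As a follow-up to the point about labels above, here is a small sketch (again not from the survey) of how labels let you address workloads as a group, using the official Kubernetes Python client; the "app=checkout" label and "default" namespace are placeholders.

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Anything that carries the label can be listed, watched, or cleaned up together,
# regardless of where it ended up running.
pods = core.list_namespaced_pod("default", label_selector="app=checkout")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)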

Here’s who shared their insights:

Running Alluxio-Presto Sandbox in Docker

The Alluxio-Presto sandbox is a Docker application featuring installations of MySQL, Hadoop, Hive, Presto, and Alluxio. The sandbox lets you easily dive into an interactive environment where you can explore Alluxio, run queries with Presto, and see the performance benefits of using Alluxio in a big data software stack.

In this guide, we’ll be using Presto and Alluxio to showcase how Alluxio can improve Presto’s query performance by caching our data locally so that it can be accessed at memory speed!

Which Three Hot Markets Are Undergoing Cloud-Native Disruption?

Cloud-native computing is perhaps the most important trend in enterprise IT today. At its core, cloud-native extends the benefits of cloud computing to the entire IT landscape, including on-premises tech as well as the edge.

The open-source container orchestration platform Kubernetes has grabbed the cloud-native flag, with a surprisingly rapid uptake in adoption across the Global 2000. Kubernetes, however, is only part of the cloud-native story.

Top 5 Challenges of DevSecOps and How to Overcome Them

DevSecOps emphasizes the need for better collaboration between development, operations, and security. It is the constant integration of efforts of all teams at every step of the process. The ultimate goal is to move into a world that is automated and synced, making most of the manual tasks obsolete.

But getting there requires changes not just to the process but to behavior as well. According to a survey by Threat Stack, 68% of companies say their CEO demands that security and DevOps teams not do anything that slows down the business. This is one of the biggest challenges of DevSecOps and why many quit the transition halfway.

The Road to Continuous Integration in Unity

Have you ever had a doubt while developing a new feature? The doubt that tells you what you are doing could break something within the project? Or worse, you don’t have that doubt and you make the project explode anyway?

In this first part, we will explain what Continuous Integration is and how it can help us. By the end of the post, we will have Unity tests running on every change we make to our project, using GitLab CI/CD.

How to Test the Test Automation Framework: Types of Tests

Nowadays, more and more companies are building test automation frameworks based on WebDriver and Appium for testing their web and mobile projects. A big part of why there are so many flaky tests is that we don't treat our tests as production code; moreover, we don't treat our framework as a product. I will present the challenges we faced while testing our test automation framework. In the first article of the series, I gave you an overview of our test infrastructure, such as the tools we use for source control, package storage, CI, and test results visualization. In this second part, we will discuss the various types of testing we did to verify that each aspect of our framework works as expected: functional, compatibility, integration, usability, installability, and more.

What We Have to Test

Before I share the details of our test infrastructure, I need to explain what we have to test. Our test automation framework is written in C# on .NET Core, which is how we achieve the cross-platform requirement. It has modules for API, desktop, web, Android, iOS, and load testing. Generally, there are two ways to use the framework. Most of our clients use it by installing NuGet packages. (NuGet is the package manager for .NET; NuGet client tools provide the ability to produce and consume packages.) To ease the process of configuring projects, we provide a Windows UI installer and CLI installers for Windows and OSX. The UI installer also adds various project and item templates and snippets for a better user experience in the Visual Studio IDE.

Reusable Popovers to Add a Little Pop

A popover is a transient view that shows up on top of the content on the screen when a user clicks a control button or within a defined area, for example, clicking an info icon on a specific list item to get the item’s details. Typically, a popover includes an arrow pointing to the location from which it emerged.

Popovers are great for situations when we want to show a temporary context to get the user’s attention while they interact with a specific element on the screen. They provide additional context and instruction for users without cluttering up the screen. Users can simply close them by clicking the same element that opened them or by clicking outside the popover.

We’re going to look at a library called popper.js that allows us to create reusable popover components in the Vue framework. Popovers are the perfect type of component for a component-based system like Vue because they can be contained, encapsulated components that are maintained on their own, but used anywhere throughout an app.

Let’s dig in and get started.

But first: What’s the difference between a popover and tooltip?

Was the name "popover" throwing you for a loop? The truth is that popovers are a lot like tooltips, which are another common UI pattern for displaying additional context in a contained element. There are differences between them, though, so let’s briefly spell them out so we have a solid handle on what we’re building.

Tooltips vs. popovers:

  • Tooltips are meant to be exactly that: a hint or tip on what a tool or other interaction does. They are meant to clarify or help you use the content they hover over, not add additional content. Popovers, on the other hand, can be much more verbose and can include a header and many lines of text in the body.
  • Tooltips are typically only visible on hover; for that reason, if you need to be able to read the content while interacting with other parts of the page, a tooltip will not work. Popovers are typically dismissible, whether by clicking on other parts of the page or by clicking the popover target a second time (depending on the implementation), so you can set up a popover that lets you interact with other elements on the page while still being able to read its content.

Popovers are most appropriate on larger screens and we’re most likely to encounter them in use cases such as:

Looking at those use cases, we can glean some requirements that make a good popover:

  1. Reusability: A popover should allow us to pass custom content to it.
  2. Dismissibility: A popover should be dismissible by clicking outside of it or by pressing the Escape key.
  3. Positioning: A popover should reposition itself when it reaches the edge of the screen.
  4. Interaction: A popover should allow the user to interact with its content.

I created an example to refer to as we go through the process of creating a component.

View Demo

OK, now that we’ve got a baseline understanding of popovers and what we’re building, let’s get into the step-by-step details for creating them using popper.js.

Step 1: Create the BasePopover component

Let’s start by creating a component that will be responsible for initializing and positioning the popover. We’ll call this component BasePopover.vue and, in the component template, we’ll render two elements:

  • Popover content: This is the element responsible for rendering the content within the popover. For now, we use a slot that allows us to pass the content in from the parent component responsible for rendering our popover (Requirement #1: Reusability).
  • Popover overlay: This is the element responsible for covering the content under the popover and preventing the user from interacting with elements outside the popover. It also allows us to close the popover when it is clicked (Requirement #2: Dismissibility).
// BasePopover.vue
<template>
  <div>
    <div
      ref="basePopoverContent"
      class="base-popover"
    >
      <slot />
    </div>
    <div
      ref="basePopoverOverlay"
      class="base-popover__overlay"
    />
  </div>
</template>

In the script section of the component:

  • we import popper.js (the library that takes care of the popover positioning), then
  • we receive the popoverOptions props, and finally
  • we set the initial popperInstance to null (because initially we do not have any popover).

Let’s describe what the popoverOptions object contains:

  • popoverReference: This is the object relative to which the popover will be positioned (usually the element that triggers the popover).
  • placement: This is a popper.js placement option that specifies where the popover is displayed in relation to the popover reference element (the thing it is attached to).
  • offset: This is a popper.js offset modifier that allows us to adjust the popover position by passing x- and y-coordinates.
import Popper from "popper.js"

export default {
  name: "BasePopover",

  props: {
    popoverOptions: {
      type: Object,
      required: true
    }
  },

  data() {
    return {
      popperInstance: null
    }
  }
}

Why do we need that? The popper.js library allows us to position one element in relation to another with ease. It also does the magic of repositioning the popover when it gets to the edge of the screen, so it always stays in the user’s viewport (Requirement #3: Positioning).

Step 2: Initialize popper.js

Now that we have a BasePopover component skeleton, we will add a few methods that will be responsible for positioning and showing the popover.

In the initPopper method, we will start by creating a modifiers object that will be used to create a Popper instance. We set the options received from the parent component (placement and offset) to the corresponding fields in the modifiers object. All those fields are optional, which is why we first need to check for their existence.

Then, we initialize a new Popper instance by passing:

  • the popoverReference node (the element to which the popover is pointing: popoverReference ref)
  • the popper content node (the element containing the popover content: basePopoverContent ref)
  • the options object

We also set the preventOverflow option to prevent the popover from being positioned outside of the viewport. After initialization we set the popper instance to our popperInstance data property to have access to methods and properties provided by popper.js in the future.

methods: {
...
  initPopper() {
    const modifiers = {}
    const { popoverReference, offset, placement } = this.popoverOptions
  
    if (offset) {
      modifiers.offset = {
        offset
      }
    }
  
    if (placement) {
      modifiers.placement = placement
    }
  
    this.popperInstance = new Popper(
      popoverReference,
      this.$refs.basePopoverContent,
      {
        placement,
        modifiers: {
          ...modifiers,
          preventOverflow: {
            boundariesElement: "viewport"
          }
        }
      }
    )
  }
...
}

Now that we have our initPopper method ready, we need a place to invoke it. The best place for that is in the mounted lifecycle hook.

mounted() {
  this.initPopper()
  this.updateOverlayPosition()
}

As you can see, we are calling one more method in the mounted hook: the updateOverlayPosition method. This method is a safeguard used to reposition our overlay in case there are other elements on the page with absolute positioning (e.g. a NavBar or SideBar). It makes sure the overlay always covers the full screen and prevents the user from interacting with any element except the popover and the overlay itself.

methods: {
...
  updateOverlayPosition() {
    const overlayElement = this.$refs.basePopoverOverlay;
    const overlayPosition = overlayElement.getBoundingClientRect();
  
    overlayElement.style.transform = `translate(-${overlayPosition.x}px, -${
      overlayPosition.y
    }px)`;
  }
...
}

Step 3: Destroy Popper

We have our popper initialized, but now we need a way to remove and dispose of it when it gets closed. There’s no need to keep it in the DOM at that point.

We want to close the popover when we click anywhere outside of it. We can do that by adding a click listener to the overlay, because we made sure that the overlay always covers the whole screen under our popover.

<template>
...
  <div
    ref="basePopoverOverlay"
    class="base-popover__overlay"
    @click.stop="destroyPopover"
  />
...
</template>

Let’s create a method responsible for destroying the popover. In that method we first check if the popperInstance actually exist and if it does we call popper destroy method that makes sure the popper instance is destroyed. After that we clean our popperInstance data property by setting it to null and emit a closePopover event that will be handled in the component responsible for rendering the popover.

methods: {
...
  destroyPopover() {
      if (this.popperInstance) {
        this.popperInstance.destroy();
        this.popperInstance = null;
        this.$emit("closePopover");
      }
    }
...
}

Step 4: Render BasePopover component

OK, we have our popover ready to be rendered. We do that in our parent component, which will be responsible for managing the visibility of the popover and passing the content to it.

In the template, we need an element responsible for triggering our popover (popoverReference) and the BasePopover component. The BasePopover component receives a popoverOptions property that tells the component how we want to display the popover, and an isPopoverVisible property bound to a v-if directive that is responsible for showing and hiding it.

<template>
  <div>
    <img
      ref="popoverReference"
      width="25%"
      src="./assets/logo.png"
    >
    <BasePopover
      v-if="isPopoverVisible"
      :popover-options="popoverOptions"
    >
      <div class="custom-content">
        <img width="25%" src="./assets/logo.png">
        Vue is Awesome!
      </div>
    </BasePopover>
  </div>
</template>

In the script section of the component, we import our BasePopover component, set the isPopoverVisible flag initially to false, and define the popoverOptions object that will be used to configure the popover on init.

data() {
  return {
    isPopoverVisible: false,
    popoverOptions: {
      popoverReference: null,
      placement: "top",
      offset: "0,0"
    }
  };
}

We set the popoverReference property to null initially because the element that will be the popover trigger does not exist when our parent component is created. We fix that in the mounted lifecycle hook, once the component (and the popover reference) has been rendered.

mounted() {
  this.popoverOptions.popoverReference = this.$refs.popoverReference;
}

Now let’s create two methods, openPopover and closePopover that will be responsible for showing and hiding our popover by setting proper value on the isPopoverVisible property.

methods: {
  closePopover() {
    this.isPopoverVisible = false;
  },
  openPopover() {
    this.isPopoverVisible = true;
  }
}

The last thing we need to do in this step is attach those methods to the appropriate elements in our template. We attach the openPopover method to the click event on our trigger element, and the closePopover method to the closePopover event emitted from the BasePopover component when the popover gets destroyed by clicking on the popover overlay.

<template>
  <div>
    <img
      ...
      @click="openPopover"
    >
    <BasePopover
      ...
      @closePopover="closePopover"
    >
      ...
    </BasePopover>
  </div>
</template>

With this in place, our popover shows up when we click on the trigger element and disappears when we click outside of it.

Step 5: Create BasePopoverContent component

It does not look like a popover yet, though. Sure, it renders the content passed to the BasePopover component, but it does so without the usual popover wrapper and an arrow pointing to the trigger element. We could have included the wrapper in the BasePopover component, but this would have made it less reusable and coupled the popover to a specific template implementation. Our solution allows us to render any template in the popover. We also want to make sure that the component is responsible only for positioning and showing the content.

To make it look like a popover, let’s create a BasePopoverContent component. We need to render two elements in the template:

  • an arrow element with the popper.js x-arrow selector, which popper.js needs in order to position the arrow properly
  • a content wrapper that exposes a slot element in which our content will be rendered
<template>
  <div class="base-popover-content">
    <div class="base-popover-content__arrow" x-arrow/>
    <div class="base-popover-content__body">
      <slot/>
    </div>
  </div>
</template>

Now let’s use our wrapper component in the parent component where we use BasePopover

<template>
  <div>
    <img
      ref="popoverReference"
      width="25%"
      src="./assets/logo.png"
      @click="openPopover"
    >
    <BasePopover
      v-if="isPopoverVisible"
      :popover-options="popoverOptions"
      @closePopover="closePopover"
    >
      <BasePopoverContent>
        <div class="custom-content">
          <img width="25%" src="./assets/logo.png">
          Vue is Awesome!
        </div>
      </BasePopoverContent>
    </BasePopover>
  </div>
</template>

And, there we go!

You can see the popover animating in and out in the example above. We’ve left animation out of this article for the sake of brevity, but you can check out other popper.js examples for inspiration.

You can see the animation code and working example here.

Let’s look at our requirements and see if we didn’t missed anything:

  • Reusability: Pass. We used a slot in the BasePopover component that decouples the popover implementation from the content template. This allows us to pass any content to the component.
  • Dismissibility: Fail. We made it possible to close the popover when clicking outside of it, but we still need to make sure we can dismiss it by pressing the Esc key.
  • Positioning: Pass. That’s where popper.js solved the issue for us. It not only gave us positioning superpowers, but also takes care of repositioning the popover when it reaches the edge of the viewport.
  • Interaction: Fail. We have a popover popping in and out, but we do not have any interaction with the popover content yet. As it stands, it looks more like a tooltip than a popover and could actually be used as a tooltip when it comes to showing and hiding the element; tooltips are usually shown on hover, so that’s the only change we’d have to make.

Darn, we failed the interaction requirement. Adding the interaction is a matter of creating a component (or components) to place in the BasePopoverContent slot. In the example, I created a very simple component with a header and text showing a few Vue style guide rules. By clicking the buttons, we can interact with the popover content and change the rules; when you get to the last rule, the button changes its purpose and serves as a close button for the popover. It’s a lot like the new-user welcome screens we see in apps.

We also need to fully meet the dismissibility requirement. We want to hit ESC on the keyboard to close the popover in addition to clicking anywhere outside it. For kicks, we’ll also add an event that proceeds to the next Vue style guide rule when pressing Enter.

We can handle that in the component responsible for rendering the popover content, using Vue key event modifiers. To make the events work, we need to make sure the popover is focused when mounted. To do that, we add a tabindex attribute to the popover content and a ref that allows us to access the element in the mounted hook and call the focus method on it.

// VueTourPopoverContent.vue

<template>
  <div
    class="vue-tour-popover-content"
    ref="vueTourPopoverContent"
    tabindex="-1"
    @keydown.enter="proceedToNextStep"
    @keydown.esc="closePopover"
  >
...
</template>
...
<script>
export default {
...
  mounted() {
    this.$refs.vueTourPopoverContent.focus();
  }
...
}
</script>

Wrapping up

And there we have it: a fully functional popover component we can use anywhere in our app. Here are a few things we learned along the way:

  • Use a popover to expose a small amount of information or functionality. Remember that the content will disappear when the user is finished with it.
  • Consider using popovers instead of temporary views like sidebars. Popovers leave more space for content and are only temporary.
  • Enable a closure behavior that makes sense based on the popover’s function. A popover should be visible only when needed. If it allows the user to make a choice, close the popover as soon as the user makes a decision.
  • Position popovers onscreen with care. A popover’s arrow should always point directly to the element that triggered it and should never cover the trigger element.
  • Display one popover on screen at a time. More than one steals attention.
  • Watch the popover’s size. Avoid making it too big, but bear in mind that proper use of padding can make things look nice and clean.

If you don't want to dig too deep into the code and you just need the component as it is, you can try out the npm package version of the component.

Hopefully you will find this component useful in your project!

The post Reusable Popovers to Add a Little Pop appeared first on CSS-Tricks.

Collective #543

React Layouts

Grab-and-go layouts for React including code examples for Rebass, Theme UI, or Emotion. By Brent Jackson.

Check it out

Ruffle

Ruffle is an Adobe Flash Player emulator written in the Rust programming language. Ruffle targets both the desktop and the web using WebAssembly.

Check it out

Svelte tutorial

A tutorial that will teach you everything you need to know to build fast, small web applications easily with Svelte.

Check it out

Introducing nushell

An introduction to a new shell, written in Rust, that draws inspiration from the classic Unix philosophy of pipelines and from PowerShell’s structured-data approach.

Read it


Collective #543 was written by Pedro Botelho and published on Codrops.

Common Misconceptions About Smart Contracts and Development Services

One of the most promising developments in blockchain is the use of smart contracts. Smart contracts are self-executing contracts in which the conditions of the agreement between buyer and seller are written directly into lines of code. The code and the agreements it contains are stored across a distributed, decentralized blockchain network. Put simply, smart contracts are computerized transaction protocols that execute the terms of a contract.

The potential advantages of smart contracts have businesses paying close attention to smart contract development services. Unfortunately, the spike in interest has also promoted the growth of some major misconceptions about smart contract development services: what they do, what their products are meant for, and what their products can do.

Using Apache Solr in Production

Solr is a search engine built on top of Apache Lucene. Apache Lucene uses an inverted index to store documents (data) and gives you search and indexing functionality via a Java API. However, to use features like full-text search, you would need to write code in Java.

Solr is a more advanced search layer built on Lucene. It offers more functionality and is designed for scalability. Solr comes loaded with features like pagination, sorting, faceting, auto-suggest, and spell check. Solr also uses a trie structure for numeric and date data types; for example, alongside the normal int field there is a tint field, which represents the trie-based int field.
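As a rough sketch of what that looks like from client code (not from the original article), the pysolr package can index and query a running Solr core over HTTP; the core name, documents, and fields below are made-up examples.

import pysolr

# Assumes a Solr core named "books" running on the default port.
solr = pysolr.Solr("http://localhost:8983/solr/books")

# Index a couple of documents; Solr maintains the inverted index for us.
solr.add([
    {"id": "1", "title": "Learning Apache Solr", "price": 25},
    {"id": "2", "title": "Lucene in Action", "price": 30},
], commit=True)

# Full-text search with sorting and pagination, no Java code required.
results = solr.search("title:solr", sort="price asc", rows=10)
for doc in results:
    print(doc["id"], doc.get("title"))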

How to Fix the 401 Error in WordPress (6 Solutions)

Are you seeing a 401 error on your WordPress site?

It is one of the most confusing WordPress errors that could lock you out of your WordPress website.

The 401 error goes by multiple names, including Error 401 and 401 Unauthorized Error. It is sometimes accompanied by a message such as ‘Access is denied due to invalid credentials’ or ‘Authorization required’.

In this article, we will show you different solutions to easily fix the 401 error in WordPress. We will also discuss what causes it, and how to avoid it in the future.

Fixing the 401 error in WordPress

What Causes the 401 Error in WordPress?

The 401 error in WordPress is caused by improper authentication while communicating with the WordPress hosting server.

For example, if you have password-protected your WordPress admin folder, then not entering a password will show a 401 error page on WordPress login and admin pages.

401 Authorization failed error

However, in some cases you may see this error even without adding any special password protection to your website.

For example, WordPress security plugins can lock down your admin area during a brute force attack.

Another common cause of this error is security measures taken by hosting companies to protect your WordPress website. These security measures start showing this error when your WordPress login pages are accessed excessively.

Mostly, the 401 error appears on WordPress admin and login pages. However, in some cases, it could appear on all pages of your website.

You’ll need to troubleshoot exactly what’s causing the error and then fix it.

That being said, let’s take a look at different solutions to quickly fix the 401 error in WordPress.

1. Temporarily Remove Password Protection on WordPress Admin

If you have password-protected your WordPress admin directory, then this could be the solution you need.

You may have forgotten your admin directory password, or your server configuration may have changed.

Head over to your WordPress hosting control panel and locate the Directory Privacy or Password Protected Directories icon.

Our screenshot is showing our Bluehost hosting account, but most hosting panels will have this option.

Directory privacy

Once you open it, you will see all the files and folders on your hosting account. Browse to your wp-admin directory and select it by clicking on the name.

The control panel will now display its password protection settings. Simply uncheck the box next to the ‘Password protect this directory’ option and click on the Save button.

Disable password protection

After that, click on the Go Back button and scroll down to the bottom of the page. From here, you need to delete the username you used to log in to your password-protected directory.

You have successfully disabled password protection for your WordPress admin directory. You can now try to log into your WordPress site.

If everything works normally, then you can go ahead and enable password protection for your WordPress admin area by creating a new user and password.

2. Clear Firewall Cache to Solve 401 Error in WordPress

If you are using a cloud-based WordPress firewall service like Sucuri or Cloudflare, then the 401 error may be triggered when the firewall fails to communicate with your website.

Purge Cache in Sucuri Firewall

If you are using Sucuri, then log in to your Sucuri dashboard and visit the ‘Performance’ page. From here, you need to switch to the ‘Clear Cache’ tab and then click on the ‘Clear cache’ button.

Sucuri clear cache

Purge Cache in Cloudflare

If you are using Cloudflare, then you need to log in to your Cloudflare dashboard and go to the ‘Caching’ section. From here, you need to click on the ‘Purge Everything’ button to clear all cache.

Cloudflare clear cache

After clearing your firewall cache, go ahead and clear your browser cache or WordPress cache as well. See our complete guide on how to clear cache in WordPress for more details.

3. Deactivate All WordPress Plugins

A misbehaving or poorly configured WordPress plugin can also trigger the 401 error. You will need to temporarily deactivate all WordPress plugins to find out if the error is caused by one of them.

You can simply deactivate WordPress plugins from inside the admin area by visiting the plugins page.

Deactivate all plugins

However, if you cannot access the WordPress admin area, then you’ll need to use FTP to deactivate all WordPress plugins.

Simply connect to your WordPress site using an FTP client. Once connected, go to the /wp-content/ folder and rename the plugins folder to plugins.deactivated.

Deactivate all WordPress plugins via FTP

Renaming the plugins folder will deactivate all WordPress plugins.

You can now visit your WordPress website’s admin area and try to log in. If everything works fine, then this means that one of the plugins was causing the issue.

Now you need to switch back to your FTP client and rename the folder back to just plugins.

Next, return to the WordPress admin area and go to the plugins page. You can now activate each plugin one at a time until you start seeing the 401 error again.

This will help you find the plugin causing the issue. Once you have found the plugin, you can contact the plugin’s support team or find an alternative plugin.

4. Switch to a Default WordPress Theme

Sometimes a function inside your WordPress theme may trigger the 401 error on your website. To find out, you need to temporarily switch to a default WordPress theme.

Default themes are made by the WordPress team and are shipped with the default WordPress install. These themes include Twenty Nineteen, Twenty Seventeen, Twenty Sixteen, and more.

First, go to Appearance » Themes page. Now if you have a default WordPress theme installed, then you can go ahead and activate it.

Activate default WordPress theme

If you don’t have a default theme installed on your site, then you need to install and activate it. See our guide on how to install a WordPress theme for instructions.

After switching the theme, you can go and test your website. If everything works OK now, then this means your theme was causing the 401 error.

You can report the issue to the theme developer, who may be able to help you fix it. If that doesn’t work, then you can permanently change your WordPress theme.

5. Reset WordPress Password

WordPress hosting companies can sometimes block access to wp-admin and login pages if someone is repeatedly trying to enter a password.

In that case, your access will be temporarily blocked, and you can try again after a few minutes.

However, instead of guessing your password, it would be best to recover your forgotten WordPress password.

Lost password

WordPress will send you an email with a link to change your password. The problem with this method is that sometimes WordPress may fail to send emails.

If you don’t get the email, then don’t worry. You can also reset the WordPress password using phpMyAdmin.

6. Contact WordPress Hosting Provider

Many WordPress hosting companies automatically detect suspicious activity on a WordPress website and block access to prevent attacks.

These security precautions sometimes only affect the WordPress admin area, and your login page may become inaccessible for a while.

Too many login attempts

However, if it does not return to normal, or you are seeing the 401 error on all of your site’s pages, then you need to contact your WordPress hosting provider immediately.

Their staff will be able to check the access and error logs to fix the issue for you.

For future prevention, you can follow our complete WordPress security guide to protect your WordPress admin area from unauthorized access.

We hope one of these solutions helped you fix the 401 error in WordPress. You may also want to see our complete WordPress troubleshooting guide with step by step instructions to fix common WordPress issues by yourself.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Fix the 401 Error in WordPress (6 Solutions) appeared first on WPBeginner.