Free Online JavaScript for WordPress Conference to Feature “Headless WordPress” Track, July 12

The second edition of the JavaScript for WordPress conference will be streamed online July 11-13, 2019. Based on the success of the 2018 event, which had 1,200 attendees watching live, organizer Zac Gordon decided to expand the event to feature three free days of talks, workshops, and a contribution day focused on JavaScript and WordPress.

The conference will run from July 11-13 and includes educational content for a whole range of JavaScript skill levels, from beginner to advanced:

  • Day 1 – Workshops for JavaScript Beginners
  • Day 2 – Three Tracks of Intermediate and Advanced Talks (plus One Non-Technical Track)
  • Day 3 – Contributor Day to help improve the JavaScript-related documentation for WordPress

Gordon has published the finalized schedule for the 36 sessions and speakers that will be streamed on Friday, July 12. This year the event will feature one track devoted to exploring topics surrounding “headless WordPress,” an approach that eschews WordPress’ traditional architecture in favor of decoupling the front and back ends, allowing developers to integrate different stacks. The track includes presentations like A React Theme in 30 Min, SEO for Headless WordPress Themes, Gatsby & WordPress, and Headless E-Commerce with BigCommerce. Other tracks feature more general JavaScript and Gutenberg topics.

Thanks to more than a dozen sponsors, registration is free, but viewers must sign up on the conference website in order to attend online.

The (Developer’s) Growth Model

I really like the post "The Designer’s Growth Model" by Dennis Hambeukers. Dennis just invented this model, but it's based on some existing ideas and it all rings true for me. Let me try to summarize the five stages as he lays them out for designers.

  1. Producers: You learn how to design. You learn fundamentals, you practice, you get good at doing design work and producing beautiful functional things. Then you have this "crisis" moment before the next stage where you find you can't do enough work on your own and that you need to be able to scale your efforts, with multiple human beings and working on systems — and that's an entirely new skill.
  2. Architects: Now that you've succeeded in scaling through team building and systems thinking, the next crisis moment is that the work might still be isolated and too focused on internal thinking. To grow, you'll need to work with people outside the design bubble, and understand problems more holistically.
  3. Connectors: Now that you've succeeded in being more collaborative across an entire organization and being a real problem solver, the next crisis moment is when everything becomes organizationally complicated. Just delivering products isn't enough, because you're involved deeply across the organization and you're responsible for the success of what is delivered.
  4. Scientists: Now, you measure everything. You know what works and what doesn't because you test it and can prove it, along with using all the skills you've honed along the way. Your next crisis is figuring out how to translate your work into actual change.
  5. Visionaries: You're a leader now. You have an understanding of how the whole organization ticks, and you are a force for change.
From The Designer's Growth Model

I think this applies just as well to web development, with very little change. I can relate in many ways. I started plucking away at building sites alone. I found more success and was able to build bigger things by working with other people. At some point, it was clear to me that things don't revolve around development. Development is merely one part of a car that doesn't drive at all without many other parts. Even today, it's clearer to me that I can be more effective and drive more positive change the more I know about all of the parts.

Not that I've completed my journey. If I had to map myself directly onto this model, I'm probably barely on step three. But a model is just a model. It's not meant to be a perfect roadmap for everybody. Your own career path will be twistier than this. You might even experience bits from all the levels in different doses along the way.

The post The (Developer’s) Growth Model appeared first on CSS-Tricks.

GIPHY Announces Platform Enhancements and New SDK

GIPHY, the provider of a database and search engine for animated GIFs, has announced a new SDK that provides third-parties access to GIFs, stickers, and additional new content like GIPHY Emoji and Text. With this release, the company hopes to accelerate integration for third-party app developers.

Data Gathering In The Wild: A Hands-On Example

Introduction

This tutorial provides guidance on gathering data through web-scraping. However, to demonstrate the real-life issues with acquiring data, a deep-dive into a specific, complicated example is needed. The problem chosen, acquiring the geographic coordinates of gas stations in a region, turns into an interesting math problem that, eventually, involves "sacred geometry".

Application Programming Interfaces (APIs)

No matter which Data Science Process model you subscribe to, actually acquiring data to work with is necessary. By far the most straightforward data source is a simple click-to-download in a standardized file format so you can utilize a parsing module in your favorite language; for example, Python's pandas.DataFrame.from_csv() function parses a .csv into a DataFrame object in one line.
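For instance, a minimal sketch of that one-liner (assuming a hypothetical local file data.csv; read_csv is the equivalent modern call, since from_csv was later deprecated):

import pandas as pd

# Parse a local CSV file into a DataFrame in one line
df = pd.read_csv("data.csv")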

Unfortunately, it's not always this easy. Real-time data, like the stream of 6,000 tweets per second, can't simply be appended to an infinitely-growing file for downloading. Furthermore, what happens when the dataset is extremely large? Smaller organizations might not be able to provide several-gigabyte downloads to each user, and if someone only needs a small subset, then giving them everything would be inefficient.

In general, these problems are solved through an Application Programming Interface (API). APIs are a programmer's way of interfacing with an application, or, in the context of this article, the means by which we will acquire data. Here's a great, concise resource on why APIs are needed.

One quick note: APIs typically differ from one another, so it's not necessary to learn every detail, and you are not expected to. You'll gain experience by using them.

To see how data gathering can look in practice, this article will demonstrate a hands-on approach to finding and dealing with an API I've never used before.

Find the Data Source

Before any searching, specify your intentions. For our case, the goal is to get the locations (latitude and longitude) of gas stations in the United States into a Python pandas.DataFrame. Notably, it took some creative googling to find a free, comprehensive source that meets our requirements. It's a little old, but myGasFeed.com has the data we're looking for.

myGasFeed was an API a developer would use to get the local gas prices of an area for a website, mobile app, etc.

Read Rules & Guidelines

Anytime you acquire data programmatically, look for any rules somewhere on the website. They might require you to register for an access key, limit requests, cite them, or follow some uncommon procedure for access. As a data-seeker, you should read and pay attention to guidelines for practical, ethical, and potentially legal reasons. Often, the practical part is a tutorial for accessing the data.

On myGasFeed's about page, they describe its features and technology and note that the data is freely accessible through their API. The page also refers you to the API section, which has directions on how to use a generic API key to access the data.

Hands-on Data Acquisition

Start Small, Then Scale

We need to write a program that interfaces with the myGasFeed application; that is, we need to use its API. In myGasFeed's API request directions, they provide a developer API key to use outside of a production environment. There's also a template URL to request all the gas stations in a certain radius, centered at a specified latitude/longitude. The response will be in JSON format.

Our program must generate the URL for a query, make a request for the data at that URL, then parse the response into a pandas.DataFrame.

First, a function to generate the URL requires a starting location, query radius, type of fuel, sorting criterion, and the API key. New Orleans, Louisiana will be our first example, but you can use any U.S. location you'd like. Just be sure that you put negatives in front of latitude if given in degrees South, and longitude if given in degrees West.

def make_url(lat, long, distance, fuel_type, sort_by, key="rfej9napna"):
    """Build the myGasFeed request URL for all stations within
    *distance* miles of (lat, long)."""
    url = "http://devapi.mygasfeed.com/"
    url += "stations/radius/%s/%s/%s/%s/%s/%s.json?callback=?" % (lat, long, distance,
                                                                  fuel_type, sort_by, key)
    return url

nola = (29.9511, -90.0715)
gas_stations_url = make_url(*nola, 40, "reg", "price")

The content of the gas_stations_url variable is the string
'http://devapi.mygasfeed.com/stations/radius/29.9511/-90.0715/40/reg/price/rfej9napna.json?callback=?'
and represents the URL to request all gas station data within 40 miles of the center of New Orleans. Note that "reg" corresponds to regular gas, "price" means the gas stations are sorted by price, and the requested radius must be less than 50 miles.

Next, we have to actually request the data, using the aptly named "requests" module. With it, we can send a "GET" request to a specified URL with the requests.get function. The myGasFeed API accepts GET requests, so it knows how to send data back to us.

import requests

response = requests.get(gas_stations_url, 
                        headers={"user-agent":"Jeffrey Lemoine, jeffmlife@gmail.com"})

Notice that the headers parameter has my name and email. This is not necessary, but it is good practice; for someone checking for misuse, a name and email might convince them you're not a bot.

The response variable is a requests.models.Response object. The data we requested is in the text attribute. Let's check by printing the first 100 characters in the response text.

print(response.text[0:100])

prints

 '?({"status":{"error":"NO","code":200,"description":"none","message":"Request ok"},"geoLocation":{"country_short":null,"lat":"29.9511","lng":"-90.0715","country_long":null,"region_short":null,"region_long":null,"city_long":null,"address":null},"stations":[{"country":"United States","zip":"70458","reg_price":"N\\/A","mid_price":"N\\/A","pre_price":"N\\/A","diesel_price":"3.85","reg_date":"7 years ago","mid_date":"7 years ago","pre_date":"7 years ago","diesel_date":"7 years ago","address":"3898 Pontch'

Though messy-looking, it appears we received a valid response, since there are no error codes and the request is said to be "ok." However, the JSON response is wrapped between '?(' and ')'. This can be solved with a short function that parses responses into a valid JSON string.

import json 
def parse_response(response):
    # get text 
    response = response.text

    # clean response text
    # initial response is wrapped in '?(...)'
    response = response[2:][:-1]

    # make json 
    data = json.loads(response)["stations"]

    return data

json_data = parse_response(response)

json.loads requires valid JSON, so trimming the response is necessary to prevent a JSONDecodeError. Since we are interested in gas station locations, only the "stations" values of our response are required. The return type of parse_response is a list of same-keyed dicts. For example, a random element in json_data is the dict

    {'address': '1701 Highway 59',
     'city': None,
     'country': 'United States',
     'diesel': '1',
     'diesel_date': '7 years ago',
     'diesel_price': 'N/A',
     'distance': '29.2 miles',
     'id': '73100',
     'lat': '30.374134',
     'lng': '-90.054672',
     'mid_date': '7 years ago',
     'mid_price': '3.49',
     'pre_date': '7 years ago',
     'pre_price': '3.61',
     'reg_date': '7 years ago',
     'reg_price': '3.29',
     'region': 'Louisiana',
     'station': None,
     'zip': '70448'}

Everything checks out (except that the data is seven years out of date).

Thankfully, a list of dicts can be transformed into a DataFrame in one line, but some more processing is required.

from pandas import DataFrame, to_numeric

gas_stations = DataFrame(json_data)[["lat","lng"]]
gas_stations.columns = ["lat","lon"]
gas_stations["lon"] = to_numeric(gas_stations["lon"])
gas_stations["lat"] = to_numeric(gas_stations["lat"])

In that same line, only the latitude and longitude columns were kept. Then, the columns were renamed because I don't like "lng" as a shorthand for longitude. Finally, the coordinate values were converted from strings to numbers. The final DataFrame looks like:

[Screenshot: the final gas_stations DataFrame with lat and lon columns]

and has 465 rows.

Voilà! We now have the geographic coordinates of gas stations within 40 miles of the center of New Orleans. But wait: we only used a 40-mile radius and are limited to 50 miles. What if you wanted more? The next section will take our small example and scale it to any radius.

A Naïve Solution

Here's our problem: we can only request locations less than 50 miles from a specified center in a single query, but we want locations more than 50 miles away. We also want to minimize queries, both to be respectful to myGasFeed.com and because we programmers strive for optimal code. I figured that we should choose different centers such that there is minimal overlap between radii while still covering all the space. Since we're dealing with circles, it's impossible to avoid some overlap, but we can try to minimize it.

Let's use Denver, Colorado as our new center and draw a circle with a 49-mile radius around it.

[Figure: a circle with a 49-mile radius centered on Denver]

Our previous code can already find all the gas stations within that radius. Next, we will figure out a way of generating centers that increases coverage. The naïve, but simplest, approach is to expand the centers outward in a gridlike fashion. The distance between centers can be the circle radius (49 miles) times the Euclidean distance between grid points. Each of those centers is a separate query to myGasFeed, so the data will need to be consolidated.

Now, let's write the function to generate the naïve grid before we make things more complicated. The procedure is simple. First, make an n x n grid. Then, for each square in the grid (there will be n^2 of them), calculate how far it is from the origin, as well as its angle with respect to true north. With a starting coordinate (Denver) and the distance and direction of a grid point, we can calculate where on Earth that point would be.

from numpy import *
from numpy.linalg import det 
from itertools import product 
from geopy.distance import geodesic, Point

def angle_between(v1, v2):
    """Calculates the clockwise angle between v1 and v2 in degrees"""

    # dot product ~ cos(θ)
    dot_product = dot(v1, v2)
    # determinant ~ sin(θ)
    determinant = det([v1, v2])

    # tan(θ) = sin(θ)/cos(θ)  ==>  θ = arctan2(sin(θ), cos(θ))
    angle_radians = arctan2(determinant, dot_product)

    # Convert to degrees in [0, 360)
    return (angle_radians * (180 / pi) + 360) % 360

def expand(center, radius, n_radii):

    # Define starting point
    start = Point(*center)

    # Generate square grid of shape (n_radii x n_radii)
    rng = arange(start = -n_radii, 
                 stop = n_radii + 1, 
                 step = 1, 
                 dtype = float64)
    grid = list(product(rng, rng))  

    # Remove center square [0,0]; no calc. required
    grid.remove(grid[len(grid)//2])

    # Reference direction 
    true_north = array([0,1], dtype=float64)

    new_centers = [center]
    for square in grid:
        # Calculate clockwise angle of square center wrt.
        # true North in degrees [0,360)
        bearing = angle_between(square, true_north)

        # Calculate distance to travel 
        euclidean = lambda p1, p2: sqrt(sum((p1-p2)**2))
        dist = radius * euclidean(square, array([0,0]))

        # Find coord. of point *dist* miles away at *bearing* degrees
        # Using geodesic ensures proper lat, long conversion on ellipsoid
        lat, long, _ = geodesic(miles = dist).destination(point=start, bearing=bearing)
        new_centers.append((lat, long))

    return new_centers

denver = (39.7392, -104.9903)  # center of Denver, CO
points = expand(denver, 49, 1)

Admittedly, my first version of the expand code could only expand once, producing a center at every 45°, i.e., a 3 x 3 lattice of points. When attempting to expand to additional radii, calculating the angle with respect to true north (i.e., the bearing) had to be generalized, which is what angle_between addresses; if you're interested in how angle_between works, check out its linear algebra here.
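As a quick sanity check (my own example, not from the original write-up), the bearings angle_between returns match compass directions:

from numpy import array

true_north = array([0, 1])

# Due north should be 0 degrees, due east 90, due west 270
print(angle_between(array([0, 1]), true_north))   # 0.0
print(angle_between(array([1, 0]), true_north))   # 90.0
print(angle_between(array([-1, 0]), true_north))  # 270.0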

As for the details of expand, it's sufficient to read its comments for an understanding. Long story short, points contains a 3 x 3 naïve grid of coordinates centered at Denver, scaled by 49 miles. If we plot these coordinates with their respective circles, we can see substantial overlap.

[Figure: the naïve 3 x 3 grid of 49-mile circles centered on Denver]

While the naïve grid indisputably achieves more-than-full coverage of the area, it's visually apparent that the grid is too tightly spaced. Thus, there should be a number we can multiply the grid spacing by to maximally separate the circles without missing any space. But which number?

The Optimal Grid

The solution for how to best space the grid points for minimally sufficient circle overlap was surprisingly simple. Notice that within the center circle of the naïve 3 x 3 grid image (above) there was a square. Its diagonal is equal to the diameter of the circle; therefore, it's the largest possible square that can fit inside a circle with that radius. Interestingly, when I put such squares inside the other circles, even they overlapped in the naïve grid.

We can visualize the inner square overlap through the following color scheme: green for no overlap, yellow for two squares, and red for four squares.

[Figure: inner-square overlap in the naïve grid: green for no overlap, yellow for two squares, red for four]

What happens if we minimize the overlap between maximal squares such that all inner space is still covered? It would just be a grid of squares, and, under our previous visualization, they'd all be green squares. Most importantly, it turns out that organizing circles in a grid according to their maximal inner square is the optimal grid for minimizing circle overlap (while maintaining full coverage).

I will now informally prove this. Imagine a grid of squares. At each corner, there are four squares surrounding it making a cross. If you were to nudge the bottom left (blue) square towards the center (red arrow), additional overlapping occurs; if you were to move the square away from the center (green arrow), then you'd lose full coverage.

[Figure: four squares meeting at a corner, with a red arrow nudging the bottom-left square toward the center and a green arrow pointing away]

While it's hard to see, moving the blue square away from the center really would lose coverage. To grasp this, look at the black line between the blue and the orange circle. There is only a single point, at the center of the grid on their squares' diagonal, where they touch this line. This is equally true for the rest of the circles. Therefore, any movement away from the center would break full coverage.

Even more, this logic applies to the whole grid, because traveling along the green arrow for one center is traveling along a red arrow for another (except at the edges).

Implementing the Solution

To actually implement the optimal grid, all we need to do is find the scaling factor for our grid spacing. Recall that the naïve grid's factor, the circle's radius, was too small, and we already know the radius of the circle. What we are missing is the maximal square's dimension, x, which is the scaling factor we require.

[Figure: the maximal square of side x inscribed in a circle of radius r]
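Spelled out, the figure's math is just the Pythagorean theorem applied to the maximal square of side x, whose diagonal equals the circle's diameter 2r:

x^2 + x^2 = (2r)^2  ==>  2x^2 = 4r^2  ==>  x = sqrt(2) * r ≈ 1.414 * r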

The optimal grid's scaling factor is the square root of two (~1.414) times the radius of the circle. It separates our grid space such that the circles of that radius have their maximal squares lined up, and their overlap minimized.

All we have to do to our original expand function is multiply the distance to travel by the square root of two:

...
dist = euclidean(square, array([0,0])) * radius * sqrt(2)
...

Now, a single expansion looks like:
[Figure: the optimal 3 x 3 grid of circles centered on Denver]

The Final Scrape

The expand function only generates the coordinate centers that we need to query. A new function is required to query all the new points. This time, though, we need a way to deal with failed requests, as well as the rate at which we make requests.

The procedure is as follows. First, generate the list of URLs to process from the points returned by expand, and put them in a list to track whether or not they've been processed. When that list is empty, return the responses. All that "process" means is to use our earlier function, parse_response, and store its result.

from time import sleep

def query(points):
    # List of URLs that need to be processed
    to_process = [make_url(*point, 49, "reg", "price") for point in points]

    data = {}
    while to_process:
        # Current URL to process
        url = to_process.pop()

        response = requests.get(url,
                                headers={"user-agent": "Jeffrey Lemoine, jeffmlife@gmail.com"})

        # Try to parse, except when bad response
        try:
            data[url] = parse_response(response)
        except json.JSONDecodeError:
            # Push the failed URL to the front so it isn't retried immediately
            to_process = [url] + to_process
        finally:
            # Limit requests to one per sec.
            sleep(1)

        print("\r%d left to process" % len(to_process), end="")

    return data

json_data = query(points)

Inside the while loop, a try-except-finally statement ensures proper handling of a bad request: if a request fails, the response is not our JSON data, so parsing it throws a json.JSONDecodeError. When that happens, we don't want the loop to retry the same URL over and over again, so we push the failed URL to the front of to_process (the next URL is popped from the end of the list). Finally, after either a successful or a failed request, we sleep one second so that we are respectful to the server.

And, with just a little more processing...

from itertools import chain

stations_list = list(chain.from_iterable(json_data.values()))

gas_stations = DataFrame(stations_list)[["lat","lng"]]
gas_stations.columns = ["lat","lon"]
gas_stations["lon"] = to_numeric(gas_stations["lon"])
gas_stations["lat"] = to_numeric(gas_stations["lat"])
gas_stations = gas_stations.drop_duplicates().dropna()
gas_stations.index = range(len(gas_stations))

We are back to a DataFrame with our data in the proper numerical types. Since the circles we generated overlap, a drop_duplicates call is necessary to get unique data points.

Conclusion

The experience gained from this procedure is important to highlight. Not all data acquisition is as straightforward as a click-to-download, nor as convoluted as the type seen in this article. Often, you are not given instructions on how to get the data you need in the format you need it, so some hacking is required to meet your needs. On the other hand, you should ask around, or email a data provider, before you spend hours trying to get an API to work the way you'd like; there might already be a way, so don't "reinvent the wheel" unless you have to (or unless you're writing a tutorial on gathering data).

Notes

It turns out that the problem we formulated, minimizing overlap between a grid of circles, is already an artistic concept (specifically in "sacred geometry"), with its own Wikipedia page. Furthermore, I should emphasize that the optimal solution we found is only for a square lattice. There are other constructions with different grids. For example, a triangular grid with 19 circles:

[Figure: a 19-circle "Flower of Life" arrangement on a triangular grid]

I encourage you to figure out the optimal configuration of circles for other underlying grids (I actually don't know yet), and see if you can code it up.

If you like the images I generated, I used plotly's mapbox for plotting points on a map, and Lucidchart for drawing on them.

In Case You Missed It – Issue 27

photo credit: Night Moves (license)

There’s a lot of great WordPress content published in the community but not all of it is featured on the Tavern. This post is an assortment of items related to WordPress that caught my eye but didn’t make it into a full post.

Carol Gann Awarded the 2019 Kim Parsell Memorial Scholarship

Carol Gann, who is a Meetup coordinator in the WordPress Orlando community, has been awarded the Kim Parsell Memorial Scholarship. The scholarship is named after Kim Parsell, who passed away in 2015; her impact on the WordPress community is still felt today.

“My proudest contribution to the WordPress open source project is training small business owners and bloggers to be comfortable and conversant with their own WordPress websites. WordPress empowers people. Many end users of WordPress are not technically minded. As a WordPress Meetup co-organizer, I contribute to the coffee help desk, assisting others in finding solutions to their WordPress problems. I also host another help desk opportunity, ‘Coffee With Carol,’ to empower WordPress users,” Gann said.

I can tell from the quote above that Kim and Carol would have gotten along well, as Kim was also the type of person who would do what she could to help others.

GravityView Diversity Grant to Attend PressNomics 6

The folks over at GravityView are offering a grant to recognize the challenges certain groups of people face in succeeding in technology fields and to promote inclusivity and diversity. The grant includes a ticket to PressNomics 6, a flight to Tucson, AZ, lodging, transportation via a Lyft gift card, and a one-on-one business consultation with Zak Katz, co-founder of GravityView. The deadline to apply is 11:59 PM MDT on June 30, 2019.

10up OpenSource Project Scaffolding Suite

10up has released a project scaffolding suite that includes a WordPress starter theme, starter plugin, and NPM package. The purpose of the suite is to streamline repetitive tasks, encourage community contributions, and provide a starting point that includes 10up’s engineering best practices.

End to End Tests Added to Core

Introducing the WordPress e2e tests

WP Tavern Turns 10 Years Old

I was looking back through the Tavern archives and realized that this past January, WP Tavern turned 10 years old. It’s been quite a journey and it’s not over yet. Check out the first post I published on the Tavern announcing its opening.

Matt Mullenweg Announces That Automattic Is Sponsoring Jill Binder’s Work

Diversifying WordPress

John James Jacoby Releases A Plugin That Cryptographically Signs Posts

John James Jacoby has released a small plugin on GitHub that cryptographically signs posts. The plugin splits the content of posts into words and then steganographically inserts zero-width characters between them. These characters then form a unique, invisible pattern that can be used to detect plagiarised content. This plugin sounds like it would pair well with WordProof.
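As a rough illustration of the underlying idea (a hypothetical Python sketch, not the plugin's actual code), zero-width characters can encode an invisible bit pattern between words:

# Hypothetical sketch of zero-width steganography (not JJJ's plugin code)
ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}  # zero-width space / zero-width non-joiner

def sign_text(text, signature_bits):
    """Hide signature_bits between words as zero-width characters."""
    words = text.split(" ")
    signed = []
    for i, word in enumerate(words[:-1]):
        bit = signature_bits[i % len(signature_bits)]
        signed.append(word + ZERO_WIDTH[bit])
    signed.append(words[-1])
    return " ".join(signed)

signed = sign_text("the quick brown fox", "101")
print(signed == "the quick brown fox")  # False: the marks are invisible but present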

What does DXP Mean?

I asked on Twitter what DXP, or Digital Experience Platform, means. It comes across as fancy marketing lingo. Here are a few of the responses I received.

Matt Medeiros – ‘DXP’ or in other words, how we want our customers to experience WordPress in our controlled ecosystem. All your solutions in one place, possibly to the point you don’t recognize it’s WordPress.

Stephen Cronin – DXP is an enterprise thing and has been around for ages in various guises. WordPress is not listed by Gartner, but Drupal and SharePoint are, along with other enterprise CMS’s. If people want to create DXPs out of WordPress, more power to them.

Karim Marucchi – Forget the buzz, large sites are moving past ‘just’ content, no one product (not #AEM not #Sitecore) will ever be perfect for all the existing & new features that are popping up ‘monthly’, so with #OpenSourse we all can make the most open easy/most compatible /cheap framework that will help the #enterprise manage/customize/blend all the ways you need to interact with your clients. And yes, the good Hosts, are staying out of trying to be all things.

Thanks to these three, the meaning of DXP is a bit more clear.

WordCamp EU Organizing Team Issues Apology

There were some things that took place during the WordCamp EU afterparty that didn’t sit well with some people. The WordCamp EU organizing team explained what happened and issued an apology for the mistakes that were made.

Torque Interviews Marcel Bootsman

Doc Pop of Torque caught up with Marcel Bootsman to talk about his walking journey to Berlin. Fittingly, the interview takes place as they’re walking around.

That’s it for issue twenty-seven. If you recently discovered a cool resource or post related to WordPress, please share it with us in the comments.

Middle Managers, the Flow of Ideas, and Innovation

It really does take a village, not just upper management.

Few people in an organization have been the focus of as much attention in innovation circles as middle managers. Depending on your point of view, they are seen as either an essential conduit by which information flows, or a barrier to the spread of ideas and knowledge.

Indeed, it’s something I myself touched upon when I looked at some new research on the topic from Wharton’s Ethan Mollick. Mollick suggested that middle managers are especially important in industries that require innovative employees, such as biotech, computing, and media.

Hierarchy = The Matrix: We Don’t See or Question It

The Matrix is real, my friends.

The hierarchy is at the very center of our lives. We have experienced it in our school years and later when working in organizations. Its existence and function are tacit in our understanding of reality.

At the Agile Alliance Change Agents workshop, it became clear to me that the existence of hierarchy was greatly influencing the sessions. I sensed that two broad themes had emerged.

Ingesting Data From Apache Kafka to TimescaleDB

The Glue Conference (better known as GlueCon) is always a treat for me. I've been speaking there since 2012, and this year I presented a session explaining how I use StreamSets Data Collector to ingest content delivery network (CDN) data from compressed CSV files in S3 to MySQL for analysis, using the Kickfire API to turn IP addresses into company data. The slides are here, and I'll write it up in a future post.

As well as speaking, I always enjoy the keynotes (shout out to Leah McGowen-Hare for an excellent presentation on inclusion!) and breakouts. In one of this year's breakouts, Diana Hsieh, director of product management at Timescale, focused on the TimescaleDB time series database.

Agile Isn’t Just for the Tech Sector Anymore

Even doctors can learn a thing or two hundred.

These days, Agile has grown beyond the IT sector and is being successfully applied in marketing, sales management, logistics, corporate governance, and more. 

Agile encompasses both the culture and the methodology that allows companies to adapt to the changes in the most effective manner.

BuddyPress 5.0 to Introduce BP REST API, First Beta Due Mid-August

BuddyPress 5.0 is on track to introduce a new BP REST API, which has been in development as a feature plugin on GitHub since 2016. Contributors plan to merge the API with 14 endpoints for popular components like activity updates, groups, members, private messages, and extended profile fields. Another eight endpoints for blogs, friends, and other features are planned to ship in BuddyPress 6.0.0.

The first major use of the BP REST API inside BuddyPress is a new group management interface that enables administrators to quickly search for specific members to promote, demote, ban, or remove. BuddyPress contributor Mathieu Viet shared a demo of what users can expect from the new interface on both the frontend and the backend.

Contributors are still discussing how to include the BP REST API in the BuddyPress plugin package: whether they should continue maintaining it on GitHub until all the endpoints are finished and include it during the BuddyPress plugin’s build process, or merge it into BuddyPress core and use Trac. GitHub is more convenient for development, but some expressed concerns about fragmenting the history of the API’s development across two platforms.

BuddyPress lead developer Boone Gorges said in a recent dev chat that shipping the BP REST API without documentation is a blocker. Contributors are now working on a new documentation site. Since version 5.0.0 will be more of a developer-oriented release, Viet suggested contributors take the opportunity to set up developer.buddypress.org with similar resources as WordPress has on its DevHub project. He is looking for feedback on his proposal for automatically generating the documents from the REST schemas of the API’s endpoints and further customizing it for integration into the broader developer.buddypress.org site.

BuddyPress contributors are targeting August 15 for releasing 5.0.0 beta 1 and will discuss a date for RC further down the road. Regular dev chat meetings have resumed and are now happening every other Wednesday at 19:00 UTC in the #BuddyPress Slack channel.

The Democratization of Innovation

I had the opportunity to meet with Ross Mason, founder of MuleSoft, following his keynote on The Democratization of Innovation.

Companies today are innovating at such a pace that no one is able to do it all internally. The next generation of the web is APIs and capabilities. Today, there are tens of thousands of APIs to build upon. There's a great opportunity for innovation by other people on your behalf.

Weekly Platform News: Event Timing, Google Earth for Web, undead session cookies

Šime posts regular content for web developers on webplatform.news.

In this week's news, Wikipedia helps identify three slow click handlers, Google Earth comes to the web, SVG properties in CSS get more support, and what to do in the event of zombie cookies.

Tracking down slow event handlers with Event Timing

Event Timing is experimentally available in Chrome (as an Origin Trial) and Wikipedia is taking part in the trial. This API can be used to accurately determine the duration of event handlers with the goal of surfacing slow events.

We quickly identified 3 very frequent slow click handlers experienced frequently by real users on Wikipedia. [...] Two of those issues are caused by expensive JavaScript calls causing style recalculation and layout.

(via Gilles Dubuc)

Google Earth for Web beta available

The preview version of Google Earth for Web (powered by WebAssembly) is now available. You can try it out in Chromium-based browsers and Firefox — it runs single-threaded in browsers that don’t yet have (re-)enabled SharedArrayBuffer — but not in Safari because of its lack of full support for WebGL2.

(via Jordon Mears)

SVG geometry properties in CSS

Firefox Nightly has implemented SVG geometry properties (x, y, r, etc.) in CSS. This feature is already supported in Chrome and Safari and is expected to ship in Firefox 69 in September.

See the Pen "Animating SVG geometry properties with CSS" by Šime Vidas (@simevidas) on CodePen.

(via Jérémie Patonnier)

Browsers can keep session cookies alive

Chrome and Firefox allow users to restore the previous browser session on startup. With this option enabled, closing the browser will not delete the user’s session cookies, nor empty the sessionStorage of web pages.

Given this session resumption behavior, it’s more important than ever to ensure that your site behaves reasonably upon receipt of an outdated session cookie (e.g. redirect the user to the login page instead of showing an error).

(via Eric Lawrence)

The post Weekly Platform News: Event Timing, Google Earth for Web, undead session cookies appeared first on CSS-Tricks.

Nownownow

Matthias Ott, relaying an idea he heard from Derek Sivers:

Many personal websites, including this one here, have an “about” page. It’s a page that tells you something about the background of a person or about the services provided. But what this page often doesn’t answer – and neither do Twitter or Facebook pages – is what this person really is up to at the moment. A page that answers questions like: What are you focused on at this point in your life? What have you just started working on that excites you like nothing else? Did you just move to a new town? Did you start a new career as a Jengascript wrangler? To answer all those questions, Derek suggests to create a “now page”. A page that tells visitors of your site “what you’d tell a friend you hadn’t seen in a year.”

Very cool idea! Derek has a directory page of people who have done this.

I have more scattered thoughts:

  • It's funny how social media sites aren't very helpful with this. You'd think looking at someone's social media profile would be the quickest and easiest way to catch up with what they are doing right now, but it just ain't. That's true for me, too. Random statements of what you're working on don't make very good social media posts. Maybe a pinned tweet could be like a "now" page, though.
  • I wonder if more homepages on people's personal sites should be this. As I browse around some of the sites, I like a lot of the "now" pages more than I like the homepage.
  • I went with a "what I want you to do" section on my personal site. It's a different vibe, but it almost doubles as a "now" page, as the things I want you to do are fairly related to the things I'm doing. Maybe the idea of a do page has some legs.


The post Nownownow appeared first on CSS-Tricks.

How to Be a Good Open Source Community Member

The key to open source is playing nice.

A friend of mine, who is a very talented writer, recently became intrigued with open source and asked me to help her understand how to be a good open source community member.

Open source is one of the most unusual things in the world. Is there any other profession where highly skilled professionals donate their free time to give their work away for free? Many spend long hours at their day jobs, just to spend their nights and weekends doing the same thing.

How to Upgrade Angular Packages and Enable the Ivy Compiler

Updating Your Packages and the Ivy Compiler

The following post focuses on the process of updating the packages used for an Angular project as well as activating the Ivy compiler. Packages can be updated in two ways:

Auto Upgrade

The first way is the easiest, as the Angular CLI undertakes to do all the work for us. You may be able to update your project using the ng update command. Before proceeding with this process, we should install the latest version of Angular so we can be sure that we will update our existing project to the latest release. To do that, we can run the following commands with the Angular CLI.
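A typical sequence uses the standard Angular CLI commands below (a sketch; the exact list of packages to update varies by project):

npm install --global @angular/cli@latest
ng update @angular/cli @angular/core

In Angular 8, where Ivy was still opt-in, you can then enable the compiler by setting "enableIvy": true under angularCompilerOptions in your project's tsconfig file.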