Automattic Has Discontinued Active Development on Edit Flow Plugin

Edit Flow, the modular editorial plugin that enables collaboration inside the WordPress admin, is no longer being actively developed. After nine months without updates, Mark Warbinek, a frustrated user, contacted Automattic to ask whether the plugin had been abandoned or would still be updated. A support representative from Automattic confirmed the company will no longer be updating Edit Flow:

At this time there is no active development of the Edit Flow Plugin.

That being the case – two things I can suggest are:

Submitting the issue to the Github repository for the plugin. This is used to track future development of the plugin and will be a canonical place for bugs or issues to be recorded.
https://github.com/Automattic/Edit-Flow

It is possible to ‘fork’ the plugin and make the changes needed – or use an alternative that has already been forked like PublishPress:
https://github.com/Automattic/Edit-Flow

Edit Flow is active on more than 10,000 WordPress sites, and its sporadic development has led users to question several times over the years whether it was abandoned. It is still listed among the WordPress.com VIP plugins, but will likely only be maintained for that platform going forward. A 10-month-old PR was merged on its GitHub repository as recently as 19 days ago, after the contributor began to question whether the project was abandoned.

In 2016, Edit Flow went two years between updates, leaving frustrated users in the dark. After that incident, a representative from Automattic said the company was working on an internal effort to improve the maintenance of its own plugins in order to avoid a repeat of the situation. The company currently has 88 plugins listed in the official directory.

PublishPress is the only alternative editorial plugin with comparable features, including an editorial calendar, notifications, editorial comments, custom statuses, and a content overview. It also offers seamless migration of Edit Flow data to PublishPress. A commercial version of the plugin includes additional features, such as a publishing checklist, reminders, permissions, a WooCommerce checklist, and more.

“I think I can speak for those users of this plugin that we are not happy with the horrible handling of this plugin, how Automattic has ignored and abandoned it, leaving users to suffer in the continuing fails this out-of-date plugin is causing,” Mark Warbinek said in response to the reply from Automattic’s support team.

Unfortunately, this is always a risk when using free plugins from WordPress.org, especially ones without a direct business model supporting development. In many instances, the plugin author’s first priority will be maintaining the plugin for paying customers, which in this case means WordPress.com VIP clients. Automattic has not posted an announcement on Edit Flow’s support forums, but an official communication would go a long way toward steering users in the right direction when they inevitably come looking for signs of life in the plugin.

Multi-Million Dollar HTML

Two stories:

  • Jason Grigsby finds that Chipotle's online ordering form uses an input-masking technique that chops up a credit card expiration year, making it invalid and thus denying the order. If pattern="\d\d" maxlength="2" had been used instead (a native browser feature, sketched below), the browser would have been smart enough to do the right thing and not deny the order. Scratchpad math, based on published data, puts that at $4.4 million.
  • Adrian Roselli recalls an all-too-common form accessibility fail: missing for/id attributes on labels and inputs, which results in an unusable experience and a scratchpad-math loss of $18 million to campaigns.
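
For the expiration-year story, a minimal sketch of the native approach looks something like this (the field name, label text, and inputmode hint are my additions):

<label for="cc-exp-year">Expiration year (YY)</label>
<input id="cc-exp-year" name="cc-exp-year" type="text"
       inputmode="numeric" pattern="\d\d" maxlength="2">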

The label/input association thing really gets me. I feel like that's an extremely basic bit of HTML knowledge that benefits both accessibility and general UX. It's part of every HTML curriculum I've ever seen and is regularly pointed to as something you need to get right. And yet a week never goes by that I don't find some production website not doing it.

We can do this, everyone!

<label for="name">Name:</label>
<input id="name" name="name" type="text">

<!-- or -->

<label>
  Name:
  <input name="name" type="text">
</label>

The post Multi-Million Dollar HTML appeared first on CSS-Tricks.

Collective #553

Uiprint

Collection of printable wireframe, mockup and dot grid sketchpad templates.

Check it out

Devices

Updated images and Sketch files of popular devices from Facebook Design.

Check it out

HTTP Toolkit

Intercept, debug and build with HTTP using this cross-platform HTTP(S) proxy, analyzer and client. Free for hobbyists.

Check it out

Dots

An amazing particles experiment by Yi-Wen Lin. Best viewed with a powerful CPU. You can find the code here.

Check it out

Collective #553 was written by Pedro Botelho and published on Codrops.

Preparing Themes For WordPress 5.3

Now that WordPress 5.3 Beta 1 is open for testing, with the official release slated for November 12, it’s time for theme authors to begin making sure their themes are ready for several changes.

Most work will revolve around the block editor. WordPress 5.3 will include versions 5.4 through 6.5 of the Gutenberg plugin, a total of 12 releases, which makes for a lot of ground to cover. The release also includes breaking changes.

For themes without custom block styles, little should change. However, theme authors who have been building custom block designs will likely have some work to do if they haven’t kept up with the changes in the Gutenberg plugin over the past several months.

Block Style Variations API Introduced

WordPress 5.3 introduces new server-side block style functions. This means that theme authors who prefer PHP can now register custom block style variations without writing JavaScript code.

The block styles feature allows theme authors to register custom styles for individual blocks. Theme authors then apply custom CSS for these styles in both the editor and the front end.

The new functions are basic one-to-one matches to their JavaScript counterparts. Block styles still need to be registered on a per-block basis. Support for registering single styles to multiple blocks at once hasn’t landed in core.
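
As a minimal sketch, registering a style variation server-side with register_block_style() looks like this (the fancy-quote name, label, and CSS are placeholders):

add_action( 'init', function() {
	register_block_style(
		'core/quote',
		array(
			'name'         => 'fancy-quote',
			'label'        => __( 'Fancy Quote', 'my-theme' ),
			'inline_style' => '.is-style-fancy-quote { color: tomato; }',
		)
	);
} );

When a user selects the style, WordPress adds the is-style-fancy-quote class to the block, which the CSS above targets.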

New Block HTML Creates Breaking Changes

Despite WordPress’ commitment to backward compatibility over the years, the Gutenberg team hasn’t maintained that approach with blocks. Block HTML output in the editor and the front end has changed for some blocks. These changes will break custom theme styles in many cases.

The following blocks have potential breaking changes for themes:

  • Group: A new inner container element was added to the markup (see the sketch after this list).
  • Table: A wrapper element was added and the block class moved to the wrapper.
  • Gallery: Like the table block, it received the same wrapper element treatment. Galleries also support a caption for the entire gallery block.
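
For instance, the group block’s new front-end markup nests content in an inner container, roughly like this (the paragraph is just example content):

<div class="wp-block-group">
	<div class="wp-block-group__inner-container">
		<p>Block content</p>
	</div>
</div>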

In my tests, the gallery block had the most obvious breaking changes. Depending on how it is styled, users could be looking at a single column of images instead of their selected number. The core development blog has a complete overview of the HTML changes along with code examples for addressing issues.

It’d be interesting to see if the Gutenberg team makes similar HTML changes with other blocks in the future. Such changes make it tough for theme authors to maintain support between versions of WordPress and versions of the Gutenberg plugin. It also bloats CSS code when attempting to maintain compatibility. Adding an extra element doesn’t typically break things. However, moving an element’s class to another element is a dumpster fire waiting to happen. If these types of changes continue to happen, it could turn some theme authors away from supporting the block editor at a time when core needs to be encouraging more authors to design around it.

New Block Classes Added

Several new CSS classes are making their way into 5.3. Themes that remove core block styles on the front end will need to add support for these classes in their stylesheets.

WordPress is doing away with inline styles for left, right, and center text alignment. This is a welcome change because it moves CSS to its appropriate place: a stylesheet. Theme authors need to make sure they support these new classes for the following blocks:

  • Heading
  • Paragraph
  • Quote
  • Verse
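
Supporting them can be a minimal addition along these lines (the class names are the ones WordPress generates; the declarations are the obvious defaults):

.has-text-align-left { text-align: left; }
.has-text-align-center { text-align: center; }
.has-text-align-right { text-align: right; }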

The columns block no longer supports column-specific class names. Version 5.3 supports custom column widths, which are handled with inline styles. It’s unlikely this will break most themes, but it’s worth testing.

The separator block now supports custom colors. It is given both the text and background color class names on the front end. This allows theme authors to utilize the styling method they prefer. Ideally, a border color class would exist, but the block editor does not yet support selecting a custom border color.

Quick developer tip: if your theme uses a border color for the separator block, use currentColor to handle custom colors.
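
As a sketch of that tip, assuming the theme draws the separator as a bottom border:

.wp-block-separator {
	border-bottom: 2px solid currentColor;
}

Because the block carries the text color class on the front end, currentColor lets the border follow whatever custom color the user picks.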

Data Science and the Growing Importance of Professional Certifications

Data scientists are among the most sought-after technical talent today. According to Glassdoor, Data Scientist has been the top role in the US for four consecutive years and is increasingly identified as an essential component for business growth. As the demand for data science talent increases and the importance of the role is understood, the need to formalize the profession arises. What exactly is the role of a data scientist and why is professional certification for data scientists growing in importance? 

You may also like: 10 Steps to Become a Data Scientist.

The Role of a Data Scientist

Data scientists work with enterprise leaders and key decision-makers to solve problems by preparing, analyzing, and understanding data to deliver insight, predict emerging trends, and provide recommendations to optimize results. The impact these professionals have varies by industry. For example, in healthcare, data scientists are using cognitive computing technologies to help doctors deliver personalized, precision medicine.

StreamSets Transformer Extensibility: Spark and Machine Learning Part One

Apache Spark has been on the rise for the past few years, and it continues to dominate the landscape when it comes to in-memory and distributed computing, real-time analysis, and machine learning use cases. And with the recent release of StreamSets Transformer, a powerful tool for creating highly instrumented Apache Spark applications for modern ETL, you can quickly start leveraging all the benefits and power of Apache Spark with minimal operational and configuration overhead.

In this blog, you will learn how to extend StreamSets Transformer in order to train a Spark ML RandomForestRegressor model.
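
Setting StreamSets specifics aside, training such a model with plain Spark ML in Scala looks roughly like this sketch (the DataFrame df, the column names, and the tree count are placeholders):

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.RandomForestRegressor

// Assemble the numeric input columns into a single feature vector.
val assembler = new VectorAssembler()
  .setInputCols(Array("feature1", "feature2"))
  .setOutputCol("features")

// Train a random forest regressor against the "label" column.
val rf = new RandomForestRegressor()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setNumTrees(20)

val model = rf.fit(assembler.transform(df))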

Secure Spring REST API Using OAuth2 + MySQL

Spring your security forward

Let’s secure our Spring REST API with OAuth2 and MySQL. We will store user credentials in the MySQL database and client credentials in an in-memory store. Every client has its own unique client ID.

To secure our REST API, we will have to wire up a few pieces of configuration.
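
As one hedged sketch of such a piece, an authorization server that keeps its clients in memory could look like this, assuming the (now-legacy) spring-security-oauth2 library; the client ID, secret, grant types, and scopes are placeholders:

import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.config.annotation.configurers.ClientDetailsServiceConfigurer;
import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer;

@Configuration
@EnableAuthorizationServer
public class AuthorizationServerConfig extends AuthorizationServerConfigurerAdapter {

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        // Each client gets its own unique client ID in the in-memory store.
        clients.inMemory()
                .withClient("my-client")
                .secret("{noop}my-secret")
                .authorizedGrantTypes("password", "refresh_token")
                .scopes("read", "write");
    }
}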

ASP.NET Core and Its Effectiveness in Building Web Applications

Is ASP.NET Core right for your development needs?

In the past, businesses didn’t have many options for developing customized web applications with unique features using cutting-edge programming languages, and offering an optimal user experience meant investing more in infrastructure. Today, digitization has revolutionized the software industry. When it comes to building a web application, there are multiple technologies to pick from, and ASP.NET Core is prime among them.

ASP.NET Core is an open-source web framework from Microsoft. Released in 2016, this web framework is one of the best alternatives to Windows-hosted ASP.NET applications.

Tom’s Tech Notes: AppSec and Threat Intelligence Visibility

It was great speaking with Surag Patel, Chief Strategy Officer at Contrast Security, during PagerDuty Summit 2019, where we were on a DevSecOps panel together.

Contrast Security has joined the PagerDuty Integration Partnership Program to help identify and resolve cybersecurity threats and attacks for distributed DevSecOps teams. By embedding vulnerability analysis and exploit prevention directly into modern software, customers are able to achieve an improved security posture across their mission-critical functions.

Integrating SQL Server Tools Into SQL Change Automation Deployments

When doing repetitive database work with SQL Change Automation (SCA) or SQL Compare, we often need to use other tools at the same time, such as the registered servers in SQL Server Management Studio (SSMS), SQLCMD, and BCP. I also tend to use the SQL Server PowerShell module, sqlserver (formerly known as sqlps). This uses Server Management Objects (SMO), which is Nature's Way of interacting with SQL Server and uses the same .NET library that underlies SSMS.

You might also enjoy:  Simple Steps in SQL Change Automation Scripting

If you do so, you'll want to integrate all these tools as much as possible, and when you're scripting with PowerShell, use the same database connections as you are using with SCA. This article is all about how you do that. We'll show how you can start integrating SCA scripts with SSMS into a single process, and we'll also learn to stop fearing the connection string and view it as an ally. We will use one to create an SMO connection via a ServerConnection object and borrow that same connection to execute BCP and run a SQL command.
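
As a small sketch of that idea in PowerShell (assuming $connectionString is the same string you hand to SCA, and the query is only an example):

# Load the SMO types, then build a ServerConnection from an ordinary SqlConnection.
Import-Module sqlserver

$sqlConnection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$serverConnection = New-Object Microsoft.SqlServer.Management.Common.ServerConnection($sqlConnection)
$server = New-Object Microsoft.SqlServer.Management.Smo.Server($serverConnection)

# The same connection can run ad-hoc SQL without opening a second one.
$serverConnection.ExecuteScalar('SELECT @@VERSION')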

Do People Empathize With Robots When They Walk in Their Shoes?

Walking in robots' shoes

As we engage more closely with robots and other automated technology, the ability to work effectively together is key. Central to this is likely to be the ability to understand one another, and a recent study from the University of Trento invokes the age-old maxim that true understanding comes when you walk a mile in someone’s shoes.

You may also like: Working Next to Robots

The research explored whether ‘beaming’ a human inside a robot affects that person’s overall attitude toward the robot. Interestingly, that is precisely what appeared to happen, with participants appearing to identify better with the robot they had been ‘beamed’ inside of.

How I Learned to Stop Worrying and Love Git Hooks

The merits of Git as a version control system are difficult to contest, but while Git will do a superb job in keeping track of the commits you and your teammates have made to a repository, it will not, in itself, guarantee the quality of those commits. Git will not stop you from committing code with linting errors in it, nor will it stop you from writing commit messages that convey no information whatsoever about the nature of the commits themselves, and it will, most certainly, not stop you from committing poorly formatted code.

Fortunately, with the help of Git hooks, we can rectify this state of affairs with only a few lines of code. In this tutorial, I will walk you through how to implement Git hooks that will only let you make a commit provided that it meets all the conditions that have been set for what constitutes an acceptable commit. If it does not meet one or more of those conditions, an error message will be shown that contains information about what needs to be done for the commit to pass the checks. In this way, we can keep the commit histories of our code bases neat and tidy, and in doing so make the lives of our teammates, and not to mention our future selves, a great deal easier and more pleasant.

As an added bonus, we will also see to it that code that passes all the tests is formatted before it gets committed. What is not to like about this proposition? Alright, let us get cracking.

Prerequisites

In order to be able to follow this tutorial, you should have a basic grasp of Node.js, npm and Git. If you have never heard of something called package.json, and git commit -m [message] sounds like code for something super-duper secret, then I recommend that you pay the Node.js and Git documentation a visit before you continue reading.

Our plan of action

First off, we are going to install the dependencies that make implementing pre-commit hooks a walk in the park. Once we have our toolbox, we are going to set up three checks that our commit will have to pass before it is made:

  • The code should be free from linting errors.
  • Any related unit tests should pass.
  • The commit message should adhere to a pre-determined format.

Then, if the commit passes all of the above checks, the code should be formatted before it is committed. An important thing to note is that these checks will only be run on files that have been staged for commit. This is a good thing, because if this were not the case, linting the whole code base and running all the unit tests would add quite an overhead time-wise.

In this tutorial, we will implement the checks discussed above for some front-end boilerplate that uses TypeScript, Jest for the unit tests, and Prettier for the code formatting. The procedure for implementing pre-commit hooks is the same regardless of the stack you are using, so by all means, do not feel compelled to jump on the TypeScript train just because I am riding it; and if you prefer Mocha to Jest, then do your unit tests with Mocha.

Installing the dependencies

First off, we are going to install Husky, which is the package that lets us do whatever checks we see fit before the commit is made. At the root of your project, run:

npm i husky --save-dev

However, as previously discussed, we only want to run the checks on files that have been staged for commit, and for this to be possible, we need to install another package, namely lint-staged:

npm i lint-staged --save-dev

Last, but not least, we are going to install commitlint, which will let us enforce a particular format for our commit messages. I have opted for one of their pre-packaged formats, namely the conventional one, since I think it encourages commit messages that are simple yet to the point. You can read more about it in the commitlint documentation.

npm install @commitlint/{config-conventional,cli} --save-dev

## If you are on a device that is running Windows
npm install @commitlint/config-conventional @commitlint/cli --save-dev
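
To give a feel for the conventional format, a passing commit message looks something like this (the type, scope, and subject are made up):

feat(navigation): add sticky header on scroll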

After the commitlint packages have been installed, you need to create a config that tells commitlint to use the conventional format. You can do this from your terminal using the command below:

echo "module.exports = {extends: ['@commitlint/config-conventional']}" > commitlint.config.js

Great! Now we can move on to the fun part, which is to say implementing our checks!

Implementing our pre-commit hooks

Below is an overview of the scripts that I have in the package.json of my boilerplate project. We are going to run two of these scripts out of the box before a commit is made, namely the lint and prettier scripts. You are probably asking yourself why we will not run the test script as well, since we are going to implement a check that makes sure any related unit tests pass. The answer is that you have to be a little bit more specific with Jest if you do not want all unit tests to run when a commit is made.

"scripts": {
  "start": "webpack-dev-server --config ./webpack.dev.js --mode development",
  "build": "webpack --config ./webpack.prod.js --mode production",
  "test": "jest",
  "lint": "tsc --noEmit",
  "prettier": "prettier --single-quote --print-width 80 "./**/*.{js,ts}" --write"
}

As you can tell from the code we added to the package.json file below, creating the pre-commit hooks for the lint and prettier scripts does not get more complicated than telling Husky that before a commit is made, lint-staged needs to be run. Then you tell lint-staged to run the lint and prettier scripts on all staged JavaScript and TypeScript files, and that is it!

"scripts": {
  "start": "webpack-dev-server --config ./webpack.dev.js --mode development",
  "build": "webpack --config ./webpack.prod.js --mode production",
  "test": "jest",
  "lint": "tsc --noEmit",
  "prettier": "prettier --single-quote --print-width 80 "./**/*.{js,ts}" --write"
},
"husky": {
  "hooks": {
    "pre-commit": "lint-staged"
  }
},
"lint-staged": {
  "./**/*.{ts}": [
    "npm run lint",
    "npm run prettier"
  ]
}

At this point, if you set out to anger the TypeScript compiler by passing a string to a function that expects a number and then try to commit this code, our lint check will stop your commit in its tracks and tell you about the error and where to find it. This way, you can correct the error of your ways, and while I think that, in itself, is pretty powerful, we are not done yet!

By adding "jest --bail --coverage --findRelatedTests" to our configuration for lint-staged, we also make sure that the commit will not be made if any related unit tests do not pass. Coupled with the lint check, this is the code equivalent of wearing two safety harnesses while fixing broken tiles on your roof.

What about making sure that all commit messages adhere to the commitlint conventional format? Commit messages are not files, so we cannot handle them with lint-staged, which only works its magic on files staged for commit. Instead, we have to return to our configuration for Husky and add another hook, after which our package.json will look like so:

"scripts": {
  "start": "webpack-dev-server --config ./webpack.dev.js --mode development",
  "build": "webpack --config ./webpack.prod.js --mode production",
  "test": "jest",
  "lint": "tsc --noEmit",
  "prettier": "prettier --single-quote --print-width 80 "./**/*.{js,ts}" --write"
},
"husky": {
  "hooks": {
    "commit-msg": "commitlint -E HUSKY_GIT_PARAMS",  //Our new hook!
    "pre-commit": "lint-staged"
  }
},
"lint-staged": {
  "./**/*.{ts}": [
    "npm run lint",
    "jest --bail --coverage --findRelatedTests", 
    "npm run prettier"
  ]
}

If your commit message does not follow the commitlint conventional format, you will not be able to make your commit: so long, poorly formatted and obscure commit messages!

If you get your house in order and write some code that passes both the linting and unit test checks, and your commit message is properly formatted, lint-staged will run the Prettier script on the files staged for commit before the commit is made, which feels like the icing on the cake. At this point, I think we can feel pretty good about ourselves; a bit smug even.

Implementing pre-commit hooks is not more difficult than that, but the gains of doing so are tremendous. While I am always skeptical of adding yet another step to my workflow, using pre-commit hooks has saved me a world of bother, and I would never go back to making my commits in the dark, if I am allowed to end this tutorial on a somewhat pseudo-poetical note.

The post How I Learned to Stop Worrying and Love Git Hooks appeared first on CSS-Tricks.

Database Monitoring

Thanks to DevOps, databases are managed in a very different way, and both DBAs and developers need to monitor performance, security, backups, file size, and job outcomes.

Convert C++ Code to C

Hi everyone,

/* Includes and the block-header definition are assumed from context;
   the struct fields below are inferred from how the function uses them. */
#include <stdbool.h>
#include <stddef.h>
#include <unistd.h>

typedef struct sblock {
    struct sblock *next;
    bool isfree;
    size_t size;
    void *memoryAddress;
} _SBLOCK;

#define BLOCK_SIZE sizeof(_SBLOCK)

_SBLOCK *allocateMemBlock(size_t size)
{
    /* The current program break is where the new block header will live. */
    _SBLOCK *block = (_SBLOCK *)sbrk(0);
    void *memadr = (void *)sbrk(0);
    /* Grow the heap by the header size plus the requested payload. */
    void *allocate_mem = (void *)sbrk(BLOCK_SIZE + size);
    if (allocate_mem == (void *)-1) {
        return NULL; /* sbrk failed: no memory available */
    } else {
        block->next = NULL;
        block->isfree = false;
        block->size = size;
        /* The usable memory starts just past the header. */
        block->memoryAddress = (char *)memadr + BLOCK_SIZE;
        return block;
    }
}