WordCamp Long Beach to Debut a “Future of WordPress” Track

The first-ever WordCamp Long Beach is happening October 5-6 at the Pointe Conference Center at Walter Pyramid (CSULB). Organizers are planning to host practical, skill-building talks and panels, abstract discussions, and networking events at locally-owned eateries. The event will be the only WordCamp happening in Los Angeles County this year.

Last week organizers opened the call for speakers and announced a new concept for the schedule. Saturday’s program will include two traditional tracks, one geared towards users and another towards professionals. Sunday will feature a “Future of WordPress” track with more philosophical/concept style presentations focused around the topic.

“This concept was inspired by the desire to have some ‘bigger’ conversations about WordPress, its place in the web/tech ecosystem, and where WordPress is headed,” co-organizer Sé Reed said. As a former WordPress Growth Council member, Reed has a special interest in facilitating discussions on these ideas.

“These topics come up occasionally, like with the WP Council/Advisory Board and the WP Governance Project, but they always seem to be relegated to a side conversation,” Reed said. “We need to be having these conversations openly and honestly, as a community. The future of WordPress is a big issue that affects everyone who works with WordPress.

“Since there doesn’t seem to be a place where these conversations are put front-and-center, I suggested we do it at our camp, which just so happens to be one month before WCUS.”

WordCamp Long Beach’s Call for Speakers post includes a few sample topics to inspire potential applicants:

  • Internal Governance (WP Project)
  • External Governance (WP, W3C, GDPR, other acronyms)
  • Accessibility
  • The Future of WordPress
  • Future of the Web (technology, standards)
  • The WordPress Community
  • Backwards compatibility
  • WordPress’ impact on the open web
  • Third parties, browsers, operating systems, etc.

These are the types of big picture presentations that you rarely see at smaller WordCamps. They are usually sprinkled in with other topics at larger camps, so having an entire track dedicated to the Future of WordPress is a unique opportunity for attendees to join in these important conversations.

WordCamp Long Beach has space for a total of 250 attendees. Although it is the only camp happening in the county this year, the area has a strong group of local meetups throughout.

“We are lucky to have a really large number of active meetup groups spread through the county, so even though we are based in Long Beach, we are representing more than just our local meetup.”

Speaker applications are open to anyone, regardless of speaking experience. Each presentation should be 30-40 minutes in length, and applicants can also propose a workshop or panel. Applications will be open through August 23, 2019.

Anonymous Pen Save Option Removed

As of today, an account is required to save any content on CodePen.

In the past, it was possible to save a Pen without logging in. The Pen was saved “anonymously”, with no association to a CodePen account and no creator attribution.

Why?

Spammers, scammers, and others seeking to cause harm on the web abused the anonymous save feature. An overwhelming number of anonymous saves included spam/scam content or otherwise went against our Code of Conduct and Terms of Service. Over the past year, the volume and severity of abusive content increased.

Additionally, anonymous save created problems for people who unintentionally saved Pens without logging in. Some people would lose track of work because it wasn’t associated with their account. Others posted things they didn’t mean to share publicly, not aware that they would not be able to edit the anonymous Pen later.

CodePen offers free accounts. People are better off saving Pens to an account anyway so their work can be found again and edited if need be. Plus, more of CodePen's full feature set (like Full Page View) is available for Pens saved to an account, which wasn't the case for anonymous Pens.

When?

Now. Since so many anonymous saves were abusive, we made the decision not to give advance notice of this change to avoid tipping off abusers.

I don’t want to use my real name on CodePen. Do I have to now?

No. We removed the option to save a Pen without a CodePen account, but you do not have to use your real name on CodePen. You are welcome to use CodePen under a pseudonym. We have more details on that in our Privacy Policy.

I saved some anonymous Pens. What will happen to them?

Though we don’t plan to delete every anonymous Pen ever created, we do frequently update our spam detection. Any existing anonymous Pens that get flagged as spam in the future will be deleted automatically.

If you have an anonymous Pen that you would like to keep, you should fork or copy that Pen into your CodePen account.

I save anonymously to keep some Pens out of my profile. What should I do now?

We understand that use case and wish we could still support it, but the abuse made this option unsustainable.

We have lots of features to help you manage your CodePen content. Here are some options to consider:

  • Delete the Pen when it’s no longer needed. We even have an option to restore deleted Pens shortly after deleting if you change your mind.
  • Make the Pen private. You can share a private Pen with anyone by giving them a direct link to the Pen, but it won’t show in your profile and can’t be found in search.
  • Give the Pen a tag, like “temporary”, to make it easier to find and delete it later.
  • If you make a lot of temporary Pens, group them into a Collection to keep them together in one spot.
  • Pin a “scratchpad” Pen to your pinned items, and use it for throwaway work as needed.

The post Anonymous Pen Save Option Removed appeared first on CodePen Blog.

Let’s Give Grunt Tasks the Marie Kondo Organization Treatment

We live in an era of webpack and npm scripts. Good or bad, they took the lead in bundling and task running, along with bits of Rollup, JSPM and Gulp. But let's face it: some of your older projects are still using good ol' Grunt. While it no longer glimmers as brightly, it does the job well, so there's little reason to touch it.

Still, from time to time, you wonder if there’s a way to make those projects better, right? If so, start with the “Organizing Your Grunt Tasks” article and come back. I'll wait. That’ll set the stage for this post, and then we'll take it further together to create a solid organization of Grunt tasks.

Automatic Speed Daemon task loading

It’s no fun writing loading declarations for each task, like this:

grunt.loadNpmTasks('grunt-contrib-clean')
grunt.loadNpmTasks('grunt-contrib-watch')
grunt.loadNpmTasks('grunt-csso')
grunt.loadNpmTasks('grunt-postcss')
grunt.loadNpmTasks('grunt-sass')
grunt.loadNpmTasks('grunt-uncss')

grunt.initConfig({})

It's common to use load-grunt-tasks to load all tasks automatically instead. But what if I told you there is a faster way?

Try jit-grunt! It's similar to load-grunt-tasks, but even faster than native grunt.loadNpmTasks.

The difference can be striking, especially in projects with large codebases.

Without jit-grunt

loading tasks     5.7s  ▇▇▇▇▇▇▇▇ 84%
assemble:compile  1.1s  ▇▇ 16%
Total 6.8s

With jit-grunt

loading tasks     111ms  ▇ 8%
loading assemble  221ms  ▇▇ 16%
assemble:compile   1.1s  ▇▇▇▇▇▇▇▇ 77%
Total 1.4s

1.4 seconds doesn't really make it a Speed Daemon... so I kinda lied. But still, it's almost five times faster than the traditional way! If you're curious how that's possible, read about the original issue which led to the creation of jit-grunt.

How is jit-grunt used? First, install:

npm install jit-grunt --save

Then replace all tasks load statements with a single line:

module.exports = function (grunt) {
  // Instead of this:
  // grunt.loadNpmTasks('grunt-contrib-clean')
  // grunt.loadNpmTasks('grunt-contrib-watch')
  // grunt.loadNpmTasks('grunt-csso')
  // grunt.loadNpmTasks('grunt-postcss')
  // grunt.loadNpmTasks('grunt-sass')
  // grunt.loadNpmTasks('grunt-uncss')

  // Or instead of this, if you've used `load-grunt-tasks`
  // require('load-grunt-tasks')(grunt, {
  //   scope: ['devDependencies', 'dependencies'] 
  // })

  // Use this:
  require('jit-grunt')(grunt)

  grunt.initConfig({})
}

Done!

Better configs loading

In the last example, we told Grunt how to load tasks itself, but we didn't quite finish the job. As “Organizing Your Grunt Tasks” suggests, one of the most useful things we're trying to do here is split up a monolithic Gruntfile into smaller standalone files.

If you read that article, you'll know it's better to move all task configuration into external files. So, instead of a single large gruntfile.js file:

module.exports = function (grunt) {
  require('jit-grunt')(grunt)

  grunt.initConfig({
    clean: {/* task configuration goes here */},
    watch: {/* task configuration goes here */},
    csso: {/* task configuration goes here */},
    postcss: {/* task configuration goes here */},
    sass: {/* task configuration goes here */},
    uncss: {/* task configuration goes here */}
  })
}

We want this:

tasks
  ├─ postcss.js
  ├─ concat.js
  ├─ cssmin.js
  ├─ jshint.js
  ├─ jsvalidate.js
  ├─ uglify.js
  ├─ watch.js
  └─ sass.js
gruntfile.js

But that will force us to load each external configuration into gruntfile.js manually, and that takes time! We need a way to load our configuration files automatically.

We’ll use load-grunt-configs for that purpose. It takes a path, grabs all of the configuration files there and gives us a merged config object which we use for Grunt config initialization.

Here's how it works:

module.exports = function (grunt) {
  require('jit-grunt')(grunt)

  const configs = require('load-grunt-configs')(grunt, {
    config: { src: 'tasks/*.js' }
  })

  grunt.initConfig(configs)
  grunt.registerTask('default', ['cssmin'])
}

Grunt can do the same thing natively! Take a look at grunt.task.loadTasks (or its alias, grunt.loadTasks).

Use it like this:

module.exports = function (grunt) {
  require('jit-grunt')(grunt)

  grunt.initConfig({})

  // Load all your external configs.
  // It's important to use it _after_ Grunt config has been initialized,
  // otherwise it will have nothing to work with.
  grunt.loadTasks('tasks')

  grunt.registerTask('default', ['cssmin'])
}

Grunt will automatically load all .js or .coffee config files from the specified directory. Nice and clean! But if you try to use it, you'll notice it does nothing. How come? We still need to do one more thing.

Let's look into our gruntfile.js code once again, this time without the comments:

module.exports = function (grunt) {
  require('jit-grunt')(grunt)

  grunt.initConfig({})

  grunt.loadTasks('tasks')

  grunt.registerTask('default', ['cssmin'])
}

Notice that grunt.loadTasks loads files from the tasks directory, but never assigns their contents to our actual Grunt config.

Compare that with the way load-grunt-configs works:

module.exports = function (grunt) {
  require('jit-grunt')(grunt)

  // 1. Load configs
  const configs = require('load-grunt-configs')(grunt, {
    config: { src: 'tasks/*.js' }
  })

  // 2. Assign configs
  grunt.initConfig(configs)

  grunt.registerTask('default', ['cssmin'])
}

We initialize our Grunt config before actually loading the task configurations. If you have a strong feeling that this will leave us with an empty Grunt config — you're totally right. You see, contrary to load-grunt-configs, grunt.loadTasks just imports files into gruntfile.js. It does nothing more.

Woah! So, how do we make use of it? Let's explore!

First, create a file named test.js inside the tasks directory:

module.exports = function () {
  console.log("Hi! I'm an external task and I'm taking precious space in your console!")
}

Let's run Grunt now:

$ grunt

We'll see this printed to the console:

> Hi! I'm an external task and I'm taking precious space in your console!

So, as grunt.loadTasks imports files, every exported function is executed. That's nice, but what's the use of it for us? We still can't do the thing we actually want — configure our tasks.

Hold my beer, because there is a way to command Grunt from within external configuration files! When grunt.loadTasks imports a file, it provides the current Grunt instance as the function's first argument and also binds it to this.

So, we can update our Gruntfile:

module.exports = function (grunt) {
  require('jit-grunt')(grunt)

  grunt.initConfig({
    // Add some value to work with
    testingValue: 123
  })

  grunt.loadTasks('tasks')

  grunt.registerTask('default', ['cssmin'])
}

...and change the external config file tasks/test.js:

// Add `grunt` as first function argument
module.exports = function (grunt) {
  // Now, use Grunt methods on `grunt` instance
  grunt.log.error('I am a Grunt error!')

  // Or use them on `this` which does the same
  this.log.error('I am a Grunt error too, from the same instance, but from `this`!')

  const config = grunt.config.get()

  grunt.log.ok('And here goes current config:')
  grunt.log.ok(config)
}

Now, let’s run Grunt again:

$ grunt

And what we'll get:

> I am a Grunt error!
> I am a Grunt error too, from the same instance, but from `this`!
> And here goes current config:
> {
    testingValue: 123
  }

See how we accessed native Grunt methods from an external file and were even able to retrieve the current Grunt config? Are you thinking about that too? Yeah, the full power of Grunt is already there, right at our fingertips in each file!

If you are wondering why methods inside external files can affect our main Grunt instance, it is because of referencing. grunt.loadTasks passes our current Grunt instance, not a copy of it, as both grunt and this. By invoking methods on that reference, we're able to read and mutate our main Grunt configuration.
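This referencing behavior is ordinary JavaScript, nothing Grunt-specific. As a minimal sketch (using a made-up stand-in object, not Grunt itself), a function that receives an object can mutate the caller's object through that shared reference:

```javascript
// A stand-in for the Grunt instance: a plain object holding config.
const fakeGrunt = {
  config: { testingValue: 123 }
}

// A stand-in for an external task file: it receives the instance
// by reference, not a copy, so its changes are visible to the caller.
function externalTask (grunt) {
  grunt.config.csso = { build: { files: { 'style.css': 'styles.css' } } }
}

externalTask(fakeGrunt)

// The mutation made inside externalTask is visible on the original object:
console.log(fakeGrunt.config.csso.build.files['style.css']) // styles.css
```

Swap fakeGrunt for the real Grunt instance and you have exactly the mechanism grunt.loadTasks relies on.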

Now, we need to actually configure something! One last thing...

This time, let’s make configuration loading work for real

Alright, we’ve come a long way. Our tasks are loaded automatically and faster. We learned how to load external configs with a native Grunt method. But our task configs still aren't quite there, because they don't end up in the Grunt config.

But we’re almost there! We learned that we can use any Grunt instance method in files imported via grunt.loadTasks. The methods are available on both the grunt argument and this.

Among many other methods, there is a precious grunt.config method. It allows us to set a value in an existing Grunt config. The main one, which we initialized in our Gruntfile... remember that one?

What's important is that this gives us a way to define task configurations. Exactly what we need!

// tasks/test.js

module.exports = function (grunt) {
  grunt.config('csso', {
    build: {
      files: { 'style.css': 'styles.css' }
    }
  })

  // same as
  // this.config('csso', {
  //   build: {
  //     files: { 'style.css': 'styles.css' }
  //   }
  // })
}

Now let's update Gruntfile to log the current config. We need to see what we did, after all:

module.exports = function (grunt) {
  require('jit-grunt')(grunt)

  grunt.initConfig({
    testingValue: 123
  })

  grunt.loadTasks('tasks')

  // Log our current config
  console.log(grunt.config())

  grunt.registerTask('default', ['cssmin'])
}

Run Grunt:

$ grunt

...and here’s what we see:

> {
    testingValue: 123,
    csso: {
      build: {
        files: {
          'style.css': 'styles.css'
        }
      }
    }
  }

grunt.config sets the csso value when the file is imported, so the CSSO task is now configured and ready to run when Grunt is invoked. Perfect.

Note that if you used load-grunt-configs previously, you had code like this, where each file exports a configuration object:

// tasks/grunt-csso.js

module.exports = {
  target: {
    files: { 'style.css': 'styles.css' }
  }
}

That needs to be changed to a function, as described above:

// tasks/grunt-csso.js

module.exports = function (grunt) {
  grunt.config('csso', {
    build: {
      files: { 'style.css': 'styles.css' }
    }
  })
}

OK, one more last thing... this time for real!

Taking external config files to the next level

We learned a lot. Load tasks, load external configuration files, define a configuration with Grunt methods... that's fine, but where's the profit?

Hold my beer again!

By this time, we’ve externalized all our task configuration files. So, our project directory looks something like this:

tasks
  ├─ grunt-browser-sync.js  
  ├─ grunt-cache-bust.js
  ├─ grunt-contrib-clean.js 
  ├─ grunt-contrib-copy.js  
  ├─ grunt-contrib-htmlmin.js   
  ├─ grunt-contrib-uglify.js
  ├─ grunt-contrib-watch.js 
  ├─ grunt-csso.js  
  ├─ grunt-nunjucks-2-html.js   
  ├─ grunt-postcss.js   
  ├─ grunt-processhtml.js
  ├─ grunt-responsive-image.js  
  ├─ grunt-sass.js  
  ├─ grunt-shell.js 
  ├─ grunt-sitemap-xml.js   
  ├─ grunt-size-report.js   
  ├─ grunt-spritesmith-map.mustache 
  ├─ grunt-spritesmith.js   
  ├─ grunt-standard.js  
  ├─ grunt-stylelint.js 
  ├─ grunt-tinypng.js   
  ├─ grunt-uncss.js 
  └─ grunt-webfont.js
gruntfile.js

That keeps the Gruntfile relatively small and things seem to be well organized. But do you get a clear picture of the project just by glancing at this cold and lifeless list of tasks? What do they actually do? What's the flow?

Can you tell that Sass files are going through grunt-sass, then grunt-postcss:autoprefixer, then grunt-uncss, and finally through grunt-csso? Is it obvious that the clean task is cleaning the CSS or that grunt-spritesmith is generating a Sass file which should be picked up too, as grunt-watch watches over changes?

Seems like things are all over the place. We may have gone too far with externalization!

So, finally... what if I told you there’s a better way: grouping configs based on features? Instead of a not-so-helpful list of tasks, we'd get a sensible list of features. How about that?

tasks
  ├─ data.js 
  ├─ fonts.js 
  ├─ icons.js 
  ├─ images.js 
  ├─ misc.js 
  ├─ scripts.js 
  ├─ sprites.js 
  ├─ styles.js 
  └─ templates.js
gruntfile.js

That tells me a story! But how could we do that?

We already learned about grunt.config. And believe it or not, you can use it multiple times in a single external file to configure multiple tasks at once! Let’s see how it works:

// tasks/styles.js

module.exports = function (grunt) {
  // Configuring Sass task
  grunt.config('sass', {
    build: {/* options */}
  })
  
  // Configuring PostCSS task
  grunt.config('postcss', {
    autoprefix: {/* options */}
  })
}

One file, multiple configurations. Quite flexible! But there is an issue we missed.

How should we deal with tasks such as grunt-contrib-watch? Its configuration is one monolithic thing, with definitions for each watched target, and it can't simply be split apart.

// tasks/grunt-contrib-watch.js

module.exports = function (grunt) {
  grunt.config('watch', {
    sprites: {/* options */},
    styles: {/* options */},
    templates: {/* options */}
  })
}

We can't simply use grunt.config to set the watch configuration in each file, as each call would override the watch configuration set in previously imported files. And leaving it in a standalone file sounds like a bad option too — after all, we want to keep all related things close.

Fret not! grunt.config.merge to the rescue!

While grunt.config explicitly sets and overrides any existing values in the Grunt config, grunt.config.merge recursively merges values with the existing ones, giving us a single combined Grunt config. A simple but effective way to keep related things together.

An example:

// tasks/styles.js

module.exports = function (grunt) {
  grunt.config.merge({
    watch: {
      styles: {/* options */}
    }
  })
}

// tasks/templates.js

module.exports = function (grunt) {
  grunt.config.merge({
    watch: {
      templates: {/* options */}
    }
  })
}

This will produce a single Grunt config:

{
  watch: {
    styles: {/* options */},
    templates: {/* options */}
  }
}

Just what we needed! Let's apply this to the real issue — our styles-related configuration files. Replace our three external task files:

tasks
  ├─ grunt-sass.js
  ├─ grunt-postcss.js   
  └─ grunt-contrib-watch.js

...with a single tasks/styles.js file that combines them all:

module.exports = function (grunt) {
  grunt.config('sass', {
    build: {
      files: [
        {
          expand: true,
          cwd: 'source/styles',
          src: '{,**/}*.scss',
          dest: 'build/assets/styles',
          ext: '.compiled.css'
        }
      ]
    }
  })

  grunt.config('postcss', {
    autoprefix: {
      files: [
        {
          expand: true,
          cwd: 'build/assets/styles',
          src: '{,**/}*.compiled.css',
          dest: 'build/assets/styles',
          ext: '.prefixed.css'
        }
      ]
    }
  })

  // Note that we need to use `grunt.config.merge` here!
  grunt.config.merge({
    watch: {
      styles: {
        files: ['source/styles/{,**/}*.scss'],
        tasks: ['sass', 'postcss:autoprefix']
      }
    }
  })
}

Now it's much easier to tell just by glancing into tasks/styles.js that styles have three related tasks. I'm sure you can imagine extending this concept to other grouped tasks, like all the things you might want to do with scripts, images, or anything else. That gives us a reasonable configuration organization. Finding things will be much easier, trust me.

And that's it! The whole point of what we learned.

That’s a wrap

Grunt is no longer the shiny new darling it was when it first hit the scene. But to this day, it remains a straightforward and reliable tool that does its job well. With proper handling, there are even fewer reasons to swap it for something newer.

Let's recap what we can do to organize our tasks efficiently:

  1. Load tasks using jit-grunt instead of load-grunt-tasks. It works the same way, but is much faster.
  2. Move specific task configurations out from Gruntfile and into external config files to keep things organized.
  3. Use native grunt.task.loadTasks to load external config files. It's simple but powerful as it exposes all Grunt capabilities.
  4. Finally, think about a better way to organize your config files! Group them by feature or domain instead of the task itself. Use grunt.config.merge to split complex tasks like watch.

And, for sure, check the Grunt documentation. It’s still worth a read after all these years.

If you’d like to see a real-world example, check out Kotsu, a Grunt-based starter kit and static website generator. You'll find even more tricks in there.

Got ideas about how to organize Grunt configs even better? Please share them in the comments!

The post Let’s Give Grunt Tasks the Marie Kondo Organization Treatment appeared first on CSS-Tricks.

Twilio Announces Conversations API for Multi-Channel Messaging

Twilio has announced Twilio Conversations, a unified API that allows developers to build conversation experiences across multiple channels and platforms. Modern messaging platforms exist across multiple channels (e.g. SMS, MMS, Chat, and WhatsApp). The Twilio Conversations API allows developers to take input from all of these disparate channels and create a single experience for users to interact with.
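To make the idea of "one API, many channels" a bit more concrete, the sketch below only assembles (and never sends) the kind of REST requests a client would use to create a conversation and post a message into it. The endpoint paths and field names follow Twilio's publicly documented conventions for the Conversations API, but treat them as assumptions rather than a verified integration:

```javascript
// Sketch: build (but do not send) requests for the Conversations REST API.
// Paths and field names are assumptions based on Twilio's documented conventions.
const BASE = 'https://conversations.twilio.com/v1'

// A conversation is the channel-agnostic container for messages.
function createConversationRequest (friendlyName) {
  return {
    method: 'POST',
    url: `${BASE}/Conversations`,
    form: { FriendlyName: friendlyName }
  }
}

// Messages are posted to a conversation regardless of which channel
// (SMS, Chat, WhatsApp, ...) each participant is bound to.
function addMessageRequest (conversationSid, author, body) {
  return {
    method: 'POST',
    url: `${BASE}/Conversations/${conversationSid}/Messages`,
    form: { Author: author, Body: body }
  }
}

const create = createConversationRequest('support-room')
const message = addMessageRequest('CHXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'alice', 'Hello, whichever channel you are on!')

console.log(create.url) // https://conversations.twilio.com/v1/Conversations
console.log(message.form.Body)
```

In a real integration, these requests would be sent with your Twilio credentials (for example via the official twilio helper library), and participants with different channel bindings would all see the same conversation.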

O’Reilly Partners with Netlify to Publish Free E-Book: Modern Web Development on the JAMstack

If you are following the JAMstack (JavaScript, APIs, and markup) craze and want to learn more about the history and best practices of the architecture, O’Reilly has published a short book called Modern Web Development on the JAMstack that is now available as a free download. Netlify CEO Mathias Biilmann, who coined the term “JAMstack” and pioneered hosting for it, co-authored the book with Phil Hawksworth, Netlify’s principal developer advocate, with contributions from other engineers at the company.

In the introduction, they describe the JAMstack movement as a rare shift in the tech landscape that “delivers a productivity boost for developers and a large performance boost for users.” They also see it as a more efficient way of building secure and stable websites that will advance the open web.

We’ve seen firsthand how the JAMstack improves the experience for both users and developers. Most importantly, we’ve seen how increases in site speed, site reliability, and developer productivity can contribute to the continued health and viability of the open web.

The book is an important read, not only for those exploring JAMstack architecture but also for getting an outside perspective on the kinds of problems that the WordPress ecosystem needs to solve. The authors describe WordPress and other CMSs as monolithic apps, referencing security and performance concerns. The introduction summarizes many of the problems that professionals are routinely paid to solve when managing and scaling WordPress websites:

For nearly three decades, the developer community has explored ways to make the web easier and faster to develop, more capable, more performant, and more secure. At times, though, the effort has seemed to trade one goal for another. WordPress, for example, became a revolution in making content easier to author—but anyone who’s scaled a high-traffic WordPress site knows it also brings a whole set of new challenges in performance and security. Trading the simplicity of HTML files for database-powered content means facing the very real threats that sites might crash as they become popular or are hacked when nobody is watching closely.

And dynamically transforming content into HTML—each and every time it’s requested—takes quite a few compute cycles. To mitigate all the overhead, many web stacks have introduced intricate and clever caching schemes at almost every level, from the database on up. But these complex setups have often made the development process feel cumbersome and fragile. It can be difficult to get any work done on a site when you can’t get it running and testable on your own laptop. (Trust us, we know.)

Biilmann and his co-authors have kept to the more general concepts and technical details of how JAMstack architecture differs from other, more traditional stacks. JAMstack does not prescribe any specific frameworks or tools but is rather a diverse and growing ecosystem. The authors see it as “a movement, a community collection of best practices and workflows that result in high-speed websites that are a pleasure to work on.”

The book covers topics like the benefits of atomic deployments, end-to-end version control, choosing a site generator, and the variety of automation and tooling available. It suggests a few ways of handling some of the more challenging additions to static sites, such as forms, search, notifications, and identity.

Modern Web Development on the JAMstack concludes with a case study on how Smashing Magazine moved its publication from a WordPress site with thousands of articles, 200,000+ comments, and an attached Shopify store, to a new JAMstack setup. The detailed breakdown of the migration provides an interesting look at one solution to the challenges of publishing at scale. These are the kinds of architectural concerns that the WordPress ecosystem needs to continue to address and simplify for the next generation of developers.

The 127-page PDF is available for free and an EPUB version is expected sometime this week.

3 Ways Healthcare Apps Make Use of Machine Learning

The healthcare industry generates plenty of data. New methods of data collection, such as sensor-generated data, have helped make it one of the top data-producing industries.

What if this data can be used to provide better healthcare services at lower costs and increase patient satisfaction? Yes, you heard it right. It’s actually possible by applying machine learning (ML) techniques in the healthcare industry.

Announcing OmniSci.jl: A Julia Client for OmniSci

Today, I’m pleased to announce a new way to work with the OmniSci platform: OmniSci.jl, a Julia client for OmniSci! This Apache Thrift-based client is the result of a passion project I started when I arrived at OmniSci in March 2018 to complement our other open-source libraries for accessing data: pymapd, mapd-connector, and JDBC.

Julia and OmniSci: Similar in Spirit and Outcomes

If you’re not familiar with Julia, it is a dynamically-typed, just-in-time compiled language built on LLVM that can match or beat the performance of compiled languages such as C/C++ and FORTRAN. With the performance of C++ and the convenience of writing Python, Julia quickly became my favorite programming language when I started using it around 2013.

SSO — WSO2 API Manager and Keycloak

In this article, I am going to show how to implement Single Sign-On (SSO) for WSO2 API Manager using Keycloak as a Federated Identity Provider. I will also take a deep dive into debugging the WSO2 API Manager code to see what happens inside when it's configured with a third-party identity provider (i.e., Keycloak in this example).

High-Level Architecture

This is what we are going to do in this tutorial.

AppSec Key Elements

To understand the current and future state of application security, we obtained insights from five IT executives. We asked them, “What are the most important elements of application security?” Here’s what they told us:

  • Visibility is crucial; if you can’t see what’s going on, you don’t know where to act. This is why our perspective inside the application is so crucial.
  • Have empathy for the developer. 80% of our companies are developers. Remember what developers have to do — make something that’s relevant, useful, popular, with features, scalable, performant, and secure. Start by understanding that developers have a lot on their plate, and think about how to make their lives as easy as possible. Take the AppSec concern. Ensure that it's consumable and actionable by a developer. If you can form the issue as a bug with direction on how to fix the bug, if you create a form like a JIRA ticket, then you’ve gone as far as a security leader to find issues and make it actionable to fix quickly.
  • Application Security improves as you look at your application deliverable from various perspectives. It’s important to shift left so you get feedback on security vulnerabilities as a developer is coding and includes dependencies into their project — this can be done through IDE plugins (like Nexus Lifecycle).

    At the same time, shifting left doesn’t remove the need for centralized static application security testing (SAST) and dynamic application security testing (DAST), since these techniques can bring different violations to light. Monitoring for attacks in production is a very useful technique as well, as on average, companies take about 197 days to identify and 69 days to contain a breach according to IBM, which clearly shows us that there’s significant room for improvement. New, innovative security solutions even allow you to install agents into the runtime of applications running in production, which monitor critical segments in code and put them in a walled garden, so that even if a malicious user manages to trigger an exploit, they’ll be cut off instantly. The most important element is to not ignore application security completely and to use a multi-perspectival approach since each perspective yields subtly different insights.
  • We have found a holistic security mindset is crucial in every aspect of an application’s development and operation. Continually testing, scanning, and verifying applications is the best way to ensure their secure operation.

    There are two tasks: securing the application lifecycle and securing the application operation. The first part is injecting application security into all phases of the lifecycle. We need to test applications at the programming and build phases when collecting elements, at deploy, throughout production, and through the decommissioning of the application. It’s all about detecting vulnerabilities. Once it’s up and running, it's less protected. RASP is a technology I defined in 2012. It’s being adopted very slowly, as it requires instrumentation in a runtime environment.

Here’s who shared their insights:

What You Should Know About the PCI Software Security Framework in 2019

The Payment Card Industry Security Standards Council (PCI SSC) recently announced the new PCI Software Security Framework. The new set of standards aims to improve the security resiliency of applications that accept payments and use payment data in their ecosystems. Learn everything you need to know about the PCI Software Security Framework in this article.

What Is the PCI Software Security Framework?

The framework is a new set of standards for securing payment data against data breaches and fraud. There are standards for the secure design, development, and maintenance of modern payment solutions. The standard applies to payment software that is sold, distributed, or licensed to third parties for the purposes of supporting or facilitating payment transactions.

Four Most Used REST API Authentication Methods

While there are as many proprietary authentication methods as there are systems that utilize them, they are largely variations of a few major approaches. In this post, I will go over the four most used in the REST API and microservices world.
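As a warm-up before the survey, here is a minimal sketch of the simplest of the common schemes, HTTP Basic authentication: the client base64-encodes `user:password` and sends it in the `Authorization` header. The credentials below are made up for illustration:

```javascript
// HTTP Basic authentication: "Basic " + base64("user:password") in the
// Authorization header. Demo credentials only -- never hardcode real ones.
function basicAuthHeader (user, password) {
  const token = Buffer.from(`${user}:${password}`).toString('base64')
  return `Basic ${token}`
}

console.log(basicAuthHeader('aladdin', 'opensesame'))
// Basic YWxhZGRpbjpvcGVuc2VzYW1l
```

Note that the encoding is trivially reversible, which is why Basic auth is only acceptable over TLS; the other schemes covered below exist largely to avoid sending reusable credentials on every request.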

Authentication vs. Authorization

Before I dive into this, let's define what authentication actually is, and more importantly, what it’s not. As much as authentication drives the modern internet, the topic is often conflated with a closely related term: authorization.

Developing WordPress Sites With Docker

I recently set up a new WordPress-based website and local Docker-based development environment. This post documents what I did so that I can do it again next time! As I'm not in the WordPress world, many things are strange to me and I'm indebted to Jenny Wong for pointing me in the right direction on numerous occasions and being very patient with my questions! Thanks Jenny!

Project Organization

There are always ancillary files and directories in a project that aren't part of the actual website, so I have put the WordPress site in a subdirectory called app, leaving room for other stuff. My project's root directory looks like this:
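As a hedged illustration only (every name apart from app is an assumption, not the author's actual listing), such a root directory might look like:

```
.
├─ app/                 # the WordPress site itself
├─ docker-compose.yml   # local Docker environment
├─ bin/                 # helper scripts
└─ README.md
```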

Five Best Practices for GoLang CI/CD

For developers programming in long-established languages like Java, JavaScript, or Python, the best way to build continuous integration and delivery (CI/CD) workflows with Artifactory is pretty familiar. A mature set of dependency management systems for those languages and container solutions, such as Docker, provide a clear roadmap.

But if you’re programming your applications in GoLang, how hard is it to practice CI/CD with the same kind of efficiency?

Running Decision Trees in Neo4j

Editor’s Note: This presentation was given by Max De Marzi at GraphConnect 2018 in New York City.

Presentation Summary

In this presentation, Max De Marzi shares how decision trees are used to make near-real-time decisions using a graph database. In this case, he uses the unorthodox example of nightclub entrance criteria.

Network Graphs

A network graph is a chart that displays relationships between elements (nodes) using simple links. Network graphs allow us to quickly visualize clusters and relationships between the nodes. These graphs are often used in industries such as life science, cybersecurity, intelligence, etc.

Creating a network graph is straightforward. This demo shows five nodes and the relationships between them. Node one has a relationship with nodes three, four, and two. Node five also has a relationship with nodes two and four, but it does not have a relationship with node three.
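The little demo described above translates directly into data. Here is an illustrative sketch in plain JavaScript (a real graph database would use its own query language) that encodes those five nodes as a list of links and builds an adjacency list from them:

```javascript
// Undirected network graph from the demo: links as pairs of node ids.
const links = [
  [1, 3], [1, 4], [1, 2], // node one relates to nodes three, four, and two
  [5, 2], [5, 4]          // node five relates to nodes two and four
]

// Build an adjacency list to answer "who is connected to whom?"
const adjacency = {}
for (const [a, b] of links) {
  (adjacency[a] = adjacency[a] || []).push(b);
  (adjacency[b] = adjacency[b] || []).push(a);
}

console.log(adjacency[5]) // [ 2, 4 ]
console.log(adjacency[3]) // [ 1 ] -- no link between nodes five and three
```

The adjacency list is the same structure a graph visualization or a decision-tree traversal would walk to follow relationships from node to node.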