Introducing Reseller – Automate Your Digital Agency

Today we are officially releasing our new Reseller platform, the end-to-end way to sell hosting, domains, templates, support and services through your own white-label portal, on your own domain.

Available exclusively to WPMU DEV Agency members, this platform has been built from the ground up to make selling-sites-while-you-sleep a reality.

Read on to see how you can use Reseller to create your own Squarespace or GoDaddy, or simply to automate your web development business.

Manual no more

Our members have always resold WPMU DEV plugins, hosting, and services, and we’ve always done whatever we can to support that by making it easy to white-label everything we do.

But that’s always been a manual process: your customers contact you, you set things up and email them back, and so on.

Well, that ends today. With Reseller, you can offer your site visitors a hosting plan, a choice of templates, and any services you care to add on, and they can purchase it and create a subscription without you having to lift a finger.

They’ll be able to manage their site and their subscription using your professional custom portal, get support, and purchase other services as you define them.

And shortly, you’ll be able to package domains with that too.

Reseller tools

To get started, just click the new Reseller icon in The Hub and follow the instructions. As part of this, you’ll set up:

White-label billing, powered by Stripe

A screen showing our clients and billing interface
Easily manage your clients, invoices, and recurring revenue from one place.

Our client billing platform has already processed over $5 million in subscriptions and invoices for our members, and it powers Reseller, allowing you to easily let your customers set up subscription packages and make one-off payments.

Even better, we charge 0% commission, so you get to keep the entire fee.

White-label client portal, at your own domain

A screen showing a branded portal login
Create a branded portal where clients can check out, log in, and manage their sites.

The Hub Client is already used by thousands of WPMU DEV members to provide a professional portal for their clients, and now you can also use it to create elegant product and pricing tables that allow your users to check out directly to their new site.

And you can control exactly what they see there.

Hosting and template packages

An example of some newly created hosting reseller products and plans
Add your products, create bundled plans, and set your pricing/subscription terms.

It’s easy with Reseller to create as many packages as you like, combining templates (use ours or create your own) and whatever hosting packages you choose. Then, simply add prices and advertise them on your site.

We’ll only ever charge you, at our normal rates, for the hosting your customers buy.

Coming soon…

Reseller is ready to go now. In fact, it’s already being used successfully by a number of members, but there are some core integrations coming soon that we think you’ll enjoy.

Domain automation

A screen showing our wholesale domains which will soon be part of the automated reseller process
Full domain reseller automation is on the way next!

Allow your customers to search for and purchase a domain with their package; the domain will automatically be mapped to their site, and a subscription set up for them too. Domain transfers will also work just fine.

In the interim, you can encourage customers to bring their own domains or let them know you’ll arrange that shortly (and you can use the billing platform for that).

White-label support

A screen showing what white label support for your clients might look like
Brand our highly-rated 24/7 expert support as your own!

Are you concerned about not being able to support your clients 24/7? We know exactly how you feel and that’s why we’re going to offer white-label support to help you run your round-the-clock business, with a platform that integrates your support with ours.

Of course, you can already provide your own integrated support right now as part of The Hub Client.

Let us know how you go

We’re really looking forward to seeing how you use Reseller and to developing the platform around your needs, so please let us know through the feedback tab how you go, and what we can do to help you succeed.

And, of course, we’re interested in your impressions here too :) So feel free to leave a comment below!

Unleashing the Power of On-Premise MFA: Elevate Active Directory Security

In today's digital age, the backbone of any organization's IT infrastructure is its Active Directory (AD). This centralized directory service manages authentication and authorization, making it critical for safeguarding sensitive data and maintaining system integrity.

However, as the technological landscape evolves, so do the methods employed by cybercriminals to breach security measures. This is where Multi-Factor Authentication (MFA) steps in, presenting itself as a formidable defense against unauthorized access and data breaches.

Additional Flow Instances in IBM App Connect Enterprise

It is a common mistake to simply add more resources to a system to get more performance. For example, one might enlarge the thread pools and database connection pools, assuming more threads can do more work and therefore process more messages. This can be true up to a point, but beyond it, adding resources may yield no performance increase, or even significantly reduced performance. This article describes situations where both outcomes can occur.

Introduction to Flow Instances

IBM App Connect Enterprise (ACE), through its toolkit, allows developers to create message flows containing nodes that process a message step by step, along with logical constructs. The flow represents code, and a flow instance runs that code. A flow instance is single threaded, so to gain multiple threads, the flow can be configured with additional instances. These are in addition to the default one, so to configure ten simultaneous execution threads for a flow, additional instances should be set to 9.
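
As a sketch of how this is configured, the property can be overridden in a BAR file before deployment. The application and flow names below are hypothetical, and this assumes the standard `mqsiapplybaroverride` workflow:

```properties
# override.properties -- 1 default instance + 9 additional = 10 simultaneous threads
MyFlow#additionalInstances=9
```

The override would then be applied with something like `mqsiapplybaroverride -b MyApp.bar -k MyApp -p override.properties` before deploying the BAR file.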

Hardcoded Secret at the Heart of the Dell Compellent VMware Vulnerability

In August, Dell disclosed vulnerability CVE-2023-39250, where "A local low-privileged malicious user could potentially exploit this vulnerability to retrieve an encryption key that could aid in further attacks." This directly affects Dell Storage Integration Tools for VMware (DSITV) customers. Learn how to protect yourself from this vulnerability and pick up some tips on preventing similar mishaps in your own codebases.
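
As a small illustration of the "prevent similar mishaps" idea, a lightweight scan can flag obvious hardcoded credentials before they are committed. This is a minimal sketch, not a substitute for a dedicated secret scanner, and the keyword patterns are illustrative only:

```python
import re

# Patterns that often indicate a hardcoded credential (illustrative, not exhaustive).
SECRET_PATTERN = re.compile(
    r"(password|passwd|secret|api[_-]?key|encryption[_-]?key)\s*[:=]\s*['\"][^'\"]+['\"]",
    re.IGNORECASE,
)

def find_hardcoded_secrets(source: str):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SECRET_PATTERN.search(line):
            hits.append((lineno, line.strip()))
    return hits
```

Running a check like this in a pre-commit hook or CI pipeline catches the most careless cases; dedicated tools layer entropy analysis and provider-specific patterns on top.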

How Do I Mitigate This as a Dell Compellent Customer?

Before diving into what happened, if you think you might be affected, we encourage you to start the investigation and mitigation process as soon as possible. According to the report released by Dell, all users of DSITV should follow these workaround and mitigation steps:

Training a Handwritten Digits Classifier in Pytorch With Apache Cassandra Database

Handwritten digit recognition is one of the classic tasks undertaken by students when learning the basics of Neural Networks and Computer Vision. The basic idea is to take a number of labeled images of handwritten digits and use those to train a neural network that is able to classify new unlabeled images. For this demo, we show how to use data stored in a large-scale database as our training data. We also explain how to use that same database as a basic model registry. This addition can enable model serving as well as potentially future retraining.

Introduction

MNIST is a set of datasets that share a particular format useful for educating students about neural networks while presenting them with diverse problems. The MNIST datasets for this demo are a collection of 28 by 28-pixel grayscale images as data and classifications 0-9 as potential labels. This demo works with the original MNIST handwritten digits dataset as well as the MNIST fashion dataset. 
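
To make the classification task concrete, here is a toy sketch that uses a nearest-centroid classifier on tiny made-up "images" in place of a neural network and the real 28-by-28 data; everything here is illustrative:

```python
def train_centroids(images, labels):
    """Average the training images for each label (a stand-in for real training)."""
    sums, counts = {}, {}
    for img, label in zip(images, labels):
        acc = sums.setdefault(label, [0.0] * len(img))
        for i, pixel in enumerate(img):
            acc[i] += pixel
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(image, centroids):
    """Predict the label whose centroid is closest in squared Euclidean distance."""
    def dist(center):
        return sum((p - q) ** 2 for p, q in zip(image, center))
    return min(centroids, key=lambda label: dist(centroids[label]))
```

A real PyTorch model replaces the centroid averaging with gradient descent over network weights, but the shape of the workflow is the same: train on labeled images, then predict labels for new ones.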

Enhancing Language Models: Choosing Between RAG and Fine-Tuning

In my recent journey of developing various AI solutions powered by Language Models (LLMs), a significant question has emerged: Should we harness the capabilities of Retrieval Augmented Generation (RAG), or should we opt for the path of custom fine-tuning? This decision can profoundly impact the performance and adaptability of our AI systems. Let's delve into the considerations that can guide you in making this pivotal choice.

1. Access to External Data

Fine-Tuning: Relying on Existing Knowledge
Fine-tuning predominantly relies on existing knowledge within the model. It's ideal when your AI system can operate effectively with the data it has been initially trained on. However, it's worth noting that for fine-tuning to incorporate external data effectively, you would need a constantly updated dataset, which can be challenging to maintain, especially for rapidly changing information.

RAG: The Power of External Knowledge
One of the standout advantages of RAG lies in its ability to seamlessly tap into external sources like databases and documents. It's akin to giving your LLM the ability to 'look up' relevant information, enriching its responses with real-world data. When your application demands real-time access to external information, RAG is the go-to choice.
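
The "look it up, then answer" flow can be sketched in a few lines. This toy retriever scores documents by word overlap with the question and prepends the best match to the prompt; a production system would use embedding similarity and a real LLM call, both omitted here:

```python
def retrieve(question, documents):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda doc: len(q_words & set(doc.lower().split())))

def build_prompt(question, documents):
    """Augment the prompt with retrieved context before calling the model."""
    context = retrieve(question, documents)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"
```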

2. Modifying Model Behavior and Knowledge

Fine-Tuning: Tailoring to Your Domain
If you aim to infuse your AI with specific linguistic styles or industry-specific jargon, fine-tuning is your best ally. It allows you to mold the LLM to match your domain's unique tone and expertise. Fine-tuning lets you achieve a high degree of precision in tailoring the model's output to your exact requirements.

RAG: Prioritizing Relevance
On the flip side, RAG excels at retrieving information from external sources and prioritizing relevance. However, it may not always capture the specialized nuances or linguistic idiosyncrasies you desire. It's more about delivering contextually relevant information.

3. Availability of Training Data

Fine-Tuning: Data-Hungry Approach
Fine-tuning thrives on high-quality, domain-specific datasets. To achieve significant improvements in model performance, you'll need ample data with rich details and nuances. If your dataset is limited, fine-tuning might not yield the desired results.

RAG: Less Dependent on Domain-Specific Data
RAG is more forgiving when it comes to the quantity of domain-specific training data. Its real strength lies in its ability to fetch insights from external sources, making it a valuable choice even with limited data.

4. Handling Hallucinations

Fine-Tuning: Addressing Fabrications
While fine-tuning can be directed to reduce hallucinations (fabricated or inaccurate output), it's not a guaranteed solution. It might require additional training and data filtering to minimize these inaccuracies effectively.

RAG: Fact-Driven Reliability
Language models can sometimes generate information that isn't entirely factual, often referred to as "hallucinations." RAG systems, with their retrieval-before-response design, are less prone to this issue. They rely on external data, making them a reliable choice for fact-driven applications where accuracy is paramount.

5. Transparency Needs

Fine-Tuning: The Black Box Conundrum
Fine-tuning often operates like a black box, where it's not always clear why the model responds the way it does. This lack of transparency can be a drawback in scenarios where accountability and understanding the decision-making process are crucial.

RAG: Tracing Answers to Sources
Conversely, RAG offers a higher level of transparency and accountability. You can often trace back the model's answers to specific data sources, providing a more transparent view of how the AI arrives at its conclusions.

6. Data Dynamics

Fine-Tuning: Static Snapshot
If your data environment is dynamic and subject to frequent changes, fine-tuning can be challenging to maintain. The fine-tuned model becomes a snapshot of a specific point in time, and keeping it updated with the evolving data can be resource-intensive.

RAG: Real-Time Data Access
RAG, with its real-time data retrieval mechanism, remains updated in sync with the ever-changing data landscape. This makes it an excellent choice for applications operating in dynamic data environments.

In conclusion, the choice between RAG and fine-tuning ultimately depends on the specific needs of your application. Each approach has its unique strengths, and aligning them with your goals and data availability will determine the best fit for enhancing your Language Model. Whether you're seeking real-time external knowledge or customizing linguistic styles, making an informed choice is key to maximizing the potential of your AI solution.

Parallel Sort

Everyone knows what sorting is, and many algorithms have emerged to support it. Some well-known examples are quick sort, heap sort, and merge sort.

All of these sort sequentially, meaning a single thread performs the complete sorting operation. That works well for small to medium amounts of data, but for a huge dataset (billions of elements), sequential sorting does not scale. This is where we need to use ‘parallel sort.’
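
As a sketch of the divide-and-merge pattern, here each chunk is sorted by a separate worker and the sorted runs are merged at the end. (In CPython, threads will not actually speed up CPU-bound sorting because of the GIL; real implementations use processes or language-level parallel sorts, but the structure is the same.)

```python
from concurrent.futures import ThreadPoolExecutor
from heapq import merge

def parallel_sort(data, chunks=4):
    """Sort chunks concurrently, then merge the sorted runs."""
    size = max(1, len(data) // chunks)
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        sorted_parts = list(pool.map(sorted, parts))  # one worker per chunk
    return list(merge(*sorted_parts))  # k-way merge of the sorted runs
```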

How to Track Marketing Campaigns for WordPress

Can’t figure out how to track marketing campaigns in WordPress? Don’t worry; you aren’t alone. Measuring the performance of your website can be challenging, especially for beginners. As an entrepreneur, you already have a lot on your plate managing your website. As such, it’s easy for critical tasks like monitoring marketing campaigns to slip through […]

The post How to Track Marketing Campaigns for WordPress appeared first on WPExplorer.

Five Free AI Tools for Programmers to 10X Their Productivity

Artificial intelligence has proliferated across industries and sectors. Be it education, astrology, food and beverages, engineering, or medicine, AI applications are found everywhere. Software engineers, programmers, and developers can use AI tools to 10X their productivity. Writing code is an exciting yet tedious job; with the help of AI tools, error-free, secure code can be written fast. AI is here to make programmers' lives easier, starting with the five free tools below.

What Are AI Tools in Programming or Coding?

AI coding tools act as assistants, helping programmers and developers write accurate, error-free code. They are also efficient at completing code from an initial prompt and providing real-time suggestions for improvement.

OneStream Fast Data Extracts APIs

In modern financial performance management, efficiency, accuracy, and speed in handling vast amounts of data are paramount. OneStream Software, a leader in corporate performance management (CPM) solutions, offers a powerful API known as OneStream Fast Data Extracts. This API significantly enhances the process of extracting and utilizing financial and operational data, empowering organizations to make informed decisions swiftly and effectively.

Understanding the OneStream Fast Data Extract API

The OneStream Fast Data Extract API is a specialized interface that allows seamless integration between OneStream's platform and other applications or systems. It provides a fast and efficient way to extract data from the OneStream application, making it accessible for various purposes such as reporting, analysis, or integration with other software systems.

Revolutionizing Software Testing

This is an article from DZone's 2023 Automated Testing Trend Report.

Artificial intelligence (AI) has revolutionized the realm of software testing, introducing new possibilities and efficiencies. The demand for faster, more reliable, and more efficient testing processes has grown exponentially with the increasing complexity of modern applications. AI has emerged as a game-changing force in addressing these challenges. By leveraging AI algorithms, machine learning (ML), and advanced analytics, software testing has undergone a remarkable transformation, enabling organizations to achieve unprecedented levels of speed, accuracy, and coverage in their testing endeavors.

How Smashing Magazine Uses TinaCMS To Manage An Editorial Workflow

Smashing Magazine is drastically different today than it was just a few years ago, and you may not have even noticed. That’s how it often is with back-end development — the complete architecture changes, yet the front end you see is still very much the same.

You may recall this site was powered by WordPress up until 2019 when the team migrated our large archive of articles, guides, and tutorials to a Jamstack setup. The change was less of a mission than it was an experiment that stuck around. Sure, WordPress is still an incredibly viable CMS, especially for a site like Smashing Magazine that focuses on long-form content. But after seeing a blazing 6× improvement in page speed performance, Jamstack was something we couldn’t dismiss because the faster experience was a clear win for readers like you.

What we may not have expected was how the migration from WordPress to Jamstack would improve our developer experience in the process. We knew for sure that users benefitted from the change, but it wound up making our lives easier as well, as it opened up even more possibilities for what we can accomplish on the site — a real win-win outcome!

It took work to get to where we are today. We went from authoring in WordPress to authoring in Markdown files, so it’s not like we started reaping benefits right away. It is only now that we have integrated TinaCMS to our stack that our entire team is reaping the full benefits of our Jamstack architecture.

That’s really what I want to share in this article: a peek behind Smashing Magazine’s curtains at how we manage content. TinaCMS is not WordPress, so it has influenced the way we work. We think it’s pretty cool because TinaCMS is all about the developer experience in a CMS context, so, of course, the inner developers in us have nerded out over the sorts of things that we are now able to do.

Tina Who?

TinaCMS is not a household name in the CMS space. I’d say that’s likely by design, as its niche is clearly in the developer community rather than a “low-code” offering like WordPress or a completely “no-code” solution like Squarespace. TinaCMS has a clear audience, and the team here at Smashing Magazine just so happens to fit that profile in spades. Not everyone on the team is a developer, but most, if not all, of us are comfortable working in Git and the command line.

TinaCMS can be broadly characterized as an open-source, Git-based CMS that supports Markdown files. In fact, TinaCMS saves content as Markdown, MDX, YML, or JSON, and it creates a GraphQL API on top of those static files, allowing a team like us to query data straight from them. And since it’s all connected to a GitHub repo, we own and control everything. That’s an enticing value proposition for a company whose business is content. A self-hosted WordPress instance is similar in that regard, but having all of our content in a centralized repo of hard files makes “owning” our content more tangible than storing it in an SQL database on some server.

That’s a bit about TinaCMS. It’s designed for Jamstack the same way that you might think of Sanity, Storyblok, or Netlify CMS, but it goes further than what we’ve seen, offering everything from a content API (in GraphQL) and visual editing to an integrated local development workflow that serves us quite well here at Smashing Magazine.

The Current Writing Process

Before we look at TinaCMS’s UI and specific features, I think it’s worth sharing how content is written before it’s published on the site. It’s far from perfect and still a work in progress, but it will give you an idea of how we work and why TinaCMS fits our needs.

There are two paths we follow for writing articles: write in a Markdown file connected to a GitHub repo, or write in a collaborative space, like Dropbox Paper or Google Docs. The path we take is whichever one a contributing author is most comfortable using because both have pros and cons.

To be honest, the process is pretty much the same, no matter which path we use. The author writes something, and an editor on the team reads and edits it. Dropbox Paper exports to Markdown, so it’s really a matter of whether the author prefers a GUI or a code editor for writing. Dropbox Paper might be a little more work because it requires the extra step of exporting content and then cleaning up the file (because export is never perfect).

Once an article reaches its final draft, it is given additional formatting for things like pull quotes and related articles before it is committed to a pull request that, when merged, triggers the site to rebuild itself and deploy the changes.

The New Writing Process

Our new writing process abstracts the old process of having to work in either Markdown or a third-party service. Instead, we get to write directly in the TinaCMS editor, preview our work, hit Publish, and voilà, an article is born.

Tina’s light touch is a big reason why it works for our team. Not everyone is forced to use TinaCMS. For example, Vitaly prefers to write Markdown in his code editor on a local Git branch. No problem. That article can be viewed in TinaCMS once he pushes it to GitHub.

You’re unimpressed, right? If so, that’s good because it’s the ease of this new process that we love so much. There’s nothing inherently impressive about our new process because it sports features we were already using in WordPress before the transition took place. What’s impressive is not the features but that the features are available in our Jamstack architecture.

That’s the third “win” for our team in all of this:

  1. The site’s faster performance is a win for you,
  2. Owning hard files of our content is a win for us, and
  3. The fact that we get to write, edit, and collaborate in a CMS that supports the new architecture is a win for us and authors alike.

It’s truly unique that TinaCMS offers the sorts of features we love about WordPress and has ported them into a Jamstack experience. Other CMSs designed for the Jamstack might offer one or two of the features we wanted, but TinaCMS covers them all. I’ll give you a look at those specific features.

The Editing UI

First off, I think it’s pretty cool that we are essentially creating Markdown files in a CMS editor.

It looks like (classic) WordPress, smells like (classic) WordPress, but produces hard files that get committed directly to our repo.
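
For illustration, a saved article might look something like this on disk; the front-matter fields here are hypothetical, since the actual schema is defined per site:

```markdown
---
title: "A Peek Behind The Curtains"
author: "geoff-graham"
date: 2023-08-01
---

The article body, written in **Markdown**, committed straight to the repo.
```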

Like many full-fledged CMSs, Tina supports custom fields. That way, we have an easy way to ensure we’re inputting all the correct content in the correct places. Those fields are mapped in the content API, allowing us to query them to populate the front end. It’s true visual editing in the Jamstack.

Branch Switching & Live Previews

This is a killer feature because it doesn’t require us to deploy anything to generate a preview of an article that we can share with authors for a final editing pass before publishing the article.

How does that work? It’s clever, really. Notice the button in the screenshot indicates we’re on the master branch of our repo. That’s right: we’re fully integrated with GitHub to the extent that we can switch branches. Tina’s preview button integrates with branch deployments offered by services like Netlify, Vercel, and others. For us, that means we can work on a branch and click preview to visit the Netlify preview for that branch. That’s how we’re able to work on an article without it winding up in front of hundreds of thousands of readers.

Working Locally

Another neat thing? We can actually log into the Smashing Magazine admin and choose whether we want to work locally or directly in production.

As long as we have a local version of the site running, we can work in a sandboxed environment that prevents us from publishing accidental changes. Plus, it’s a nice — and safe — way to collaborate with others on the team to get an article prepped in advance of a live preview.

From there, we create a new branch and write to it before putting the content through the editing process, getting a live preview ready, and then merging the branch. That triggers a fresh site build, and everything gets deployed for your reading pleasure.

It’s also worth mentioning that TinaCMS automatically protects the repo’s main branch to prevent us (or, most likely, yours truly) from accidentally writing to it.

The Media Manager

What’s a CMS without a media manager?!

It’s funny, but having a flexible option in a Jamstack-based CMS is harder to find than you might think.

Tina can commit media assets to your repository, but for a site of our scale, that would make our repository unmanageable. Instead, we use Tina’s DigitalOcean Spaces integration. Again, we like the idea of owning all of our content, and integrating it with our media storage solution is important.

Uploading a file, like an image, places it on our DigitalOcean Spaces account. Once the site re-builds itself, the images are optimized and sent off to Cloudinary, which converts the image into several different formats and sizes, serving the most optimal version for the reader based on their device, location, network connection, or whatever.

The Editorial Workflow

All of the features I’ve been writing about are part of the TinaCMS “Editor Workflow” that is new as of July 10 — a mere couple of weeks before I started drafting this article. That’s how fresh all of this is for us, and TinaCMS, for that matter. You might expect a brand-new set of robust features to be a little bumpy at first, but it’s incredibly smooth.

I think a video from the TinaCMS site does a better job illustrating the flow from writing to review, from review to approval, and subsequent post-publish updates.

The Editor Workflow is available now, but it is currently implemented as a plugin for Business plans and up rather than baked right into TinaCMS. Coming from the WordPress world, I love the concept of keeping the CMS light and extending it with specific functionalities, if needed.

Hope You Enjoyed The Tour

Well, that’s a look at how the sausage is made here at Smashing Magazine. I personally enjoy seeing how things work at different organizations because no two projects are ever identical. What ends up in a stack and how work happens is largely based on specific needs that are unique to a certain team.

What works for us might seem crazy to you — or awesome. I don’t know. But we’re excited about it because it accommodates how we work and has already delivered a number of big wins for everyone.

TinaCMS is in active development, too, so it is very possible we may see new features and functionality that we decide to adopt. For example, there’s now a self-hosted version of the CMS. And looking at the roadmap, we also have more things to look forward to in the next three months.

Further Reading On SmashingMag

Decoding Business Source Licensing: A New Software Licensing Model

Business source licensing (BSL) has recently emerged as an alternative software licensing model that aims to blend the benefits of both open-source and proprietary licensing. For developers and IT professionals evaluating solutions, understanding what BSL is and its implications can help inform licensing decisions.

What Is Business Source Licensing?

Like open-source licensing, BSL makes source code viewable and modifiable. This allows a community of developers to collectively improve the software. However, BSL applies restrictions on how the software can be used commercially. This provides revenue opportunities for the company publishing the software, similar to proprietary licensing.

Edge Computing: The New Frontier in International Data Science Trends

Technology is evolving at a rapid pace, and one of the most notable recent developments is edge computing. But what exactly is it, and why is it becoming so important? This article will explore edge computing and why it is considered the new frontier in international data science trends.

Understanding Edge Computing

Edge computing is a method where data processing happens closer to where it is generated rather than relying on a centralized data-processing warehouse. This means faster response times and less strain on network resources.

Chris’ Corner: Subgrid

Chrome 117 went stable this past week. There is a website where you can see what the plan is for Chrome releases, by the way, which is handy when you care about such things.

Chrome releases a major version about once a month, and I usually don’t feel ultra compelled to write anything about each one specifically. Rachel Andrew does a great job covering web platform updates each month on web.dev, like the recent “New to the web platform in August.”

I’m extra excited about this one, though, because it means subgrid has now shipped across all three major browsers. Chrome was the straggler here:

  • Firefox shipped subgrid on Dec 2, 2019.
  • Safari shipped subgrid on Sep 11, 2022.
  • Chrome shipped subgrid on Sep 12, 2023.

Caniuse is a great site for not only checking support but also seeing when versions shipped that have support.

Lest I type too many words without explaining what subgrid is… it’s a keyword that works with grid-template-columns and grid-template-rows, letting an element adopt the grid lines that pass through it from the parent grid.

.parent {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr 1fr 1fr;
}
.child {
  grid-column: 2 / 4;
  display: grid;
  grid-template-columns: subgrid;
}

Does your browser support it? Probably, but it’s still good to check and to code around that check. Bramus has a Pen that’s a quick check. The CSS @supports feature is up for the job:

output::after {
  content: "❌ Your browser does not support subgrid";
}

@supports (grid-template-rows: subgrid) {
  output::after {
    content: "✅ Your browser supports subgrid";
  }
}

Perhaps the most classic example is when you set card elements on a grid and you want elements within the cards to line up along “shared” grid lines. Jhey has a demo like that covering the basics.
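
Here’s a minimal sketch of that shared-lines card pattern — the markup and class names are my own assumptions, not Jhey’s demo. Each card spans three parent rows and adopts them with subgrid, so every title, body, and footer sits on the same row lines as its siblings:

```css
/* Sketch of the shared-lines card pattern. Assumes markup like:
   <ul class="cards"><li class="card"><h3>…</h3><p>…</p><footer>…</footer></li></ul> */
.cards {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-auto-rows: auto auto auto; /* title, body, footer rows per card row */
  gap: 1rem;
}
.card {
  display: grid;
  grid-row: span 3;            /* each card covers three parent rows */
  grid-template-rows: subgrid; /* …and shares those row lines with siblings */
}
```

Because the rows come from the parent, a long title in one card pushes the title row taller for every card in that row.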

I’ve also played with the cards idea, which is perhaps even more obvious where there are natural lines, like background colors running into each other.

Sometimes my favorite use cases are little itty-bitty things that are otherwise annoying or impossible to pull off well. For example: aligning the CSS counters on list items. See below how, in the first example, the content in the list items is ragged on the left, but in the second example, it’s nicely aligned. That happens in this case by using subgrid so that all of those counters essentially share a column line from the parent list’s grid.
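
A rough sketch of that counter-alignment trick, with hypothetical selectors: the list becomes a two-column grid, and each item subgrids into it so every generated counter shares one column edge.

```css
/* Sketch: align generated counters by giving every list item
   a shared marker column via subgrid. Selectors are hypothetical. */
ol.steps {
  display: grid;
  grid-template-columns: max-content 1fr; /* marker column, content column */
  counter-reset: step;
  list-style: none;
}
ol.steps > li {
  display: grid;
  grid-column: span 2;
  grid-template-columns: subgrid; /* both columns line up across items */
}
ol.steps > li::before {
  counter-increment: step;
  content: counter(step) ".";
  justify-self: end; /* counters align on their shared column edge */
}
```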

That example and several more are from a video I did with Dave a little while ago looking at all sorts of uses for subgrid.

Another of my favorites? Lining up web forms that have variable-length labels. That’s exactly the use case Eric Meyer showcased when he said that subgrid was “considered essential” seven years ago, before subgrid shipped. Eric might have been a little wrong, as grid has proven to be pretty dang useful even without subgrid, but there is no doubt that it is even more so now.
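
A sketch of that form pattern, with assumed markup and class names: the form defines a label column sized by its longest label, and each field row subgrids into it so every label/input pair lines up.

```css
/* Sketch: variable-length labels lining up across rows. Assumes each
   label/input pair is wrapped, e.g. <p class="field"><label>…</label><input></p>. */
form.subgrid-form {
  display: grid;
  grid-template-columns: max-content 1fr; /* label column fits the longest label */
  gap: 0.5rem 1rem;
}
form.subgrid-form .field {
  display: grid;
  grid-column: span 2;
  grid-template-columns: subgrid; /* every pair shares the same two columns */
}
```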

MORE VIDEOS, you say? Can do!

  • I think of Rachel Andrew as the One True CSS Layout Master, and she’s got a whole talk dedicated to CSS subgrid that gets deeper into the details. One little thing you might want to know: subgrids inherit the parent grid’s gap, but they don’t have to!
  • Kevin Powell did a series of videos he called “Subgrid Awareness Month” about a year ago. This one about consistent layouts is a good place to start. CSS grid itself has strong “control the layout from the parent” vibes (unlike flexbox), and subgrid really enhances those powers.
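
To illustrate that gap detail (class names here are mine): a subgrid picks up the parent’s gap by default, but it can declare its own.

```css
/* Sketch: a subgrid inherits the parent's 2rem gap unless it overrides it. */
.parent {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  gap: 2rem;
}
.child {
  grid-column: span 4;
  display: grid;
  grid-template-columns: subgrid;
  gap: 0.5rem; /* overrides the inherited 2rem for tracks inside the subgrid */
}
```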

The post Chris’ Corner: Subgrid appeared first on CodePen Blog.

Breach and Attack Simulation Technology (Short Version)

The ever-evolving cybersecurity landscape presents growing challenges in defending against sophisticated cyber threats. Managing security in today's complex, hybrid/multi-cloud architecture compounds these challenges. This article explores the importance of demonstrating cybersecurity effectiveness and the role of Breach and Attack Simulation (BAS) technology.

Challenges in Cybersecurity: