Ant is a bit of a mystery bag. Its behavior often remains obscure until you look at its code. Then you find that it consists of a number of fairly simple facilities that are usually explained from a bottom-up, detailed, technical viewpoint rather than from a top-down architectural perspective. This article aims to provide the missing top-down view. It is targeted at an audience of software engineers. Armed with this article, and some solid opinions on when and when not to use this tool, you should be able to find your way in the anthill.
Ant History, Legacy, and Impact
If you have never heard of Ant, don't worry. Ant is yet ANother Tool in the realm of building software. It was one of the first build tools for Java. When it was conceived, XML was all the rage, and C, the language whose syntax Java mimics, was typically built with make. Consequently, Ant was influenced by make's way of thinking. Combining these two trends, Ant is a curious hybrid that has neither make's terseness nor its stringent reproducibility, but does have XML's verbose syntax. Talk about the best of both worlds.
Java developers are particularly spoiled when using Hazelcast. Because Hazelcast is developed in Java, it's available as a JAR, and we can integrate it as a library in our application. Just add it to the application's classpath, start a node, and we're good to go. However, I believe that once you start relying on Hazelcast as a critical infrastructure component, embedding limits your options. In this post, I'd like to dive a bit deeper into the subject.
Starting With Embedded
As mentioned above, the easiest way for Java developers to start their journey with Hazelcast is to embed it in their application like any other library. During application startup, we just have to call Hazelcast.newHazelcastInstance(): this will start a new Hazelcast node in the currently running JVM. Because of Hazelcast's auto-discovery capabilities, nodes will discover each other and form a cluster without further configuration. In a couple of minutes of development time, we can create a distributed In-Memory Data Grid. Hard to do better in terms of Developer Experience!
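For illustration, here is a minimal sketch of that embedded startup in Java; the class and map names are mine, not from the original post:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class EmbeddedNode {
    public static void main(String[] args) {
        // Starts a Hazelcast member inside this JVM; with default configuration,
        // members on the same network discover each other and form a cluster.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        // From here on, the node participates in the cluster like any other member.
        hz.getMap("example").put("greeting", "hello");
    }
}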
Let’s rewind a bit. The cloud emerged in the mid-2000s. Before then, enterprises relied on their own infrastructure to house everything they needed for their software, whether small business apps or larger programs. Engineers were required to manage both the hardware and the software.
Put yourself in the engineers’ shoes: on top of development, you still need to mind the integrity of your infrastructure, including servers, networks, storage, services, and applications. Managing the hardware, let alone the software, is an expensive process that requires skilled technicians.
In my previous post on AWS Elastic Compute Cloud (EC2) Basics, we launched two EC2 instances, one in a public subnet and one in a private subnet. With security groups configured, we were able to SSH to the EC2 instance in the public subnet.
In this post, we will continue and set up a bastion host and a NAT instance in our VPC. We will learn why we need them and some of the options available to us.
In our previous article in this series, we shared a look at the common logical architectural elements found in a point-of-sale imaging solution for retail stores.
We laid out how we approached the use case and how portfolio solutions form the basis for researching a generic architectural blueprint.
In my previous article in this series, I shared a look at the common logical architectural elements found in supply chain integration for retail stores.
I laid out how I approached the use case, researching successful customer portfolio solutions as the basis for a generic architectural blueprint.
Today we’d like to present a blueprint for large Vue JS projects. It uses the new and exciting Vite build tool and Lerna monorepo manager. I’ve built large enterprise projects in a similar way, using Angular, Vue JS, webpack, and rollup. Vite, created by the Vue JS team, looks very promising, so I wanted to give it a try.
There are plenty of Vite tutorials and demos, mostly the usual Hello World and Todo apps. But I needed something more useful. I wanted to see whether Vite can replace rollup and webpack in large real-life projects.
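As a first taste, here is a minimal per-package Vite config. The Vue 3 plugin choice and the one-config-per-Lerna-package layout are my assumptions for illustration, not prescriptions from this article:

// vite.config.js: minimal config for one package in the Lerna monorepo
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

export default defineConfig({
  plugins: [vue()],
})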
Components are great, aren’t they? They are these reusable sources of truth that you can use to build rock-solid front-ends without duplicating code.
You know what else is super cool? Headless content management! Headless content management system (CMS) products offer a content editing experience while freeing that content in the form of data that can be ported, well, to any API-consuming front-end UI. You can structure your content however you’d like (depending on the product), and pull that content into your front-end applications.
Using these two things together — a distributed CMS solution with component-based front-end applications — is a core tenet of the Jamstack.
But, while components and headless CMSs are great on their own, it can be difficult to get them to play nicely together. I’m not saying it’s difficult to hook one up to the other. In a lot of cases, it’s actually quite painless. But, to craft a system of components that is reusable and consistent, and to have that system maintain parity with a well-designed CMS experience is a difficult thing to achieve. It’s that win-win combo of being able to freely write content and then have that content structured into predictable components that makes headless content management so appealing.
Achieving parity between a CMS and front-end components
My favorite example for demonstrating this complexity is a simple component: a button. Let’s say we’re working with React to build components and our button looks like this:
<Button to="/">Go Home</Button>
In the lovely land of React, that means the <Button> component has two props (i.e. properties, attributes, arguments, etc.): to and children. (children is a React thing that holds all the content within the opening and closing tags, which is “Go Home” in this case.)
If we’re going to enable users in the content editor to add buttons to the site, we want a system for them that makes it easy to understand how their actions in the CMS affect what appears on screen in the front-end app. But we also want our developer(s) to work productively with component properties that make sense to them and within the framework they’re working (i.e. React in our example).
How do we do that?
We could…
…use fields in the CMS that match the components’ properties, though I’ve had little success with this approach. to and children don’t make much sense to content editors trying to build a button. Believe me, I’ve tried. I’ve tried with beginners and experienced editors alike. I’ve tried helper text. It doesn’t matter. It’s confusing.
What makes more sense is using words editors are more likely to understand, like label or text for children and url for to.
But then we’d be out of sync with our code.
Or what if we…
…masked attributes in the CMS. Most headless CMS solutions enable you to have a different value for the label of the field than the name that is used when delivering content via an API.
We could label our fields Label and URL, but use children and to as the names. We could. But we probably shouldn’t. Remember what Ian Malcolm said?
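As a generic illustration (deliberately not the schema syntax of any particular CMS), the masked field definitions might look like this:

// The editor sees the label; the API delivers the field under its name.
const buttonFields = [
  { label: 'Label', name: 'children', type: 'string' },
  { label: 'URL', name: 'to', type: 'string' },
]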
On the surface, masking attributes makes sense. It’s a separation of concerns. The editors see something that makes them happy and productive, and the developers work with the names that make sense to them. I like it, but only in theory. In practice, it confuses developers. Debugging a content editor issue often requires digging through extra layers (i.e. time) to find the relationship between labels and field names.
Or why not…
…change the properties. Wouldn’t it be easier for developers to be flexible? They’re the ones designing the system, after all.
Yes, that’s true. But if you follow that rule exclusively, it’s inevitable that you’re going to run into some issue along the way. You’ll likely end up fighting against the framework, or props will just feel goofy.
In our example, using label and url as props for a button works totally fine for data that originates from the CMS. But that also means that any time our developers want to use a button within the code, it looks like this:
<Button label="Go Home" url="/" />
That may seem okay on the surface, but it significantly limits the power of the button. Let’s say I want to support some other feature, like adding an icon within the label. I’m going to need some additional logic or another property for it. If I had used React’s children approach instead, it would have just worked (likely after some custom styling support).
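For instance, with children, a developer can compose richer content straight into the label without any new props (HomeIcon is a hypothetical icon component):

<Button to="/"><HomeIcon /> Go Home</Button>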
Okay, so… what do we do?
Introducing transformers
The best approach I’ve found is to separately optimize the editor and developer experiences. Craft a CMS experience that is catered to the editors. Build a codebase that is easy for developers to navigate, understand, and enhance.
The result is that the two experiences will not be in parity with one another. We need some set of utilities to transform the data from the CMS structure into something that can be used by the front-end, regardless of the framework and tooling you’re using.
I call these utilities transformers. (Aren’t I so good at naming things!?) Transformers are responsible for consuming data from your CMS and transforming it into a shape that can be easily consumed by your components.
While I’ve found that transforming data is the smoothest means to get great experiences in both the CMS and the codebase, I don’t have an obvious solution for how (or perhaps where) those transformations should happen. I’ve used three different approaches, all of which have their pros and cons. Let’s take a look at them.
1. Alongside components
One approach is to put transformers right alongside the components they are serving. This is the approach I typically take in organizing component-based projects — to keep related files close to one another.
That means that I often have a directory for every component with a predictable set of files. The index.js acts as the controller for the component. It is responsible for importing and exporting all other relevant files. That makes it trivial to wrap the component with some logic-based behavior. In other words, it could transform properties of the component before rendering it. Here’s what that might look like for our button example:
// index.js: the component's controller, wrapping the presentational piece
import React from "react"
import Component from "./component"
import transform from "./transformer"
const Button = props => <Component {...transform(props)} />
export default Button
In this example, if to and children were properties sent to the component, it works just fine! But if label and url were used instead, they are transformed to children and to. That means the <Button> component (component.js) only has to worry about using children and to.
const Button = ({ children, to }) => <a href={to}>{children}</a>
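For completeness, here is a minimal sketch of what transformer.js could look like; the article implies this file but doesn't show it. It accepts either the CMS shape (label, url) or the native React shape (children, to) and always returns the native one:

// transformer.js: maps CMS-friendly prop names onto the component's own.
const transform = ({ label, url, children, to, ...rest }) => ({
  ...rest,
  children: children ?? label,
  to: to ?? url,
})

export default transform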
I personally love this approach. It keeps the logic tightly coupled with the component. The biggest downside I’ve found thus far is that it’s a large number of files and transforms, when the entire dataset for any given page could be transformed earlier in the stack, which would be…
2. At the top of the funnel
The data has to be pulled into the application via some mechanism. Developers use this mechanism to retrieve as much data for the current page or view as possible. Often, the fewer queries or requests a page has to make, the better its performance.
In other words, that mechanism often exists near the top of the funnel (or stack), as opposed to each component pulling its own data in dynamically. (When that’s necessary, I use adapters.)
The mechanism that retrieves the page data could also be responsible for transforming all the data for the given page before it renders any of its components.
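Sketched in code, and assuming the page arrives as a list of typed blocks (fetchPageFromCMS and the block shape are assumptions for illustration), it might look like this:

import buttonTransform from './components/button/transformer'

// One transformer per component type; unknown types pass through untouched.
const transformers = { button: buttonTransform }

// Stand-in for a real CMS client call (an assumption for this sketch).
async function fetchPageFromCMS(slug) {
  return { slug, blocks: [{ type: 'button', props: { label: 'Go Home', url: '/' } }] }
}

async function getPageData(slug) {
  const page = await fetchPageFromCMS(slug)
  page.blocks = page.blocks.map(block => ({
    ...block,
    props: (transformers[block.type] ?? (props => props))(block.props),
  }))
  return page
}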
In theory, this is a better approach than the first one. It decreases the amount of work the browser has to do, which should improve the front-end performance. That means the server has to do more work, but that’s often a better choice.
In practice, though, this is a lot of work. Data structures can be big, complex, and interwoven. It can take a heck of a lot of work to transform everything into the right format at the top of the funnel, and then pass the transformed data down to components. It’s also more difficult to test because of the potential complexity and variation of the giant data blob retrieved at the top of the stack. With the first approach, testing the transformer logic for the button is trivial. With this approach, you’d want to account for transforming button data anywhere that it might appear in the retrieved data object.
But, if you can pull it off, this is generally the better approach.
3. The middleman engine
The third and final (and magical) approach is to do all this work somewhere else. In this case, we could build an engine (i.e. a small application) that would do the transformations for us, and then make the content available for the application to consume.
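One possible shape for such an engine, sketched here with Express; the framework choice, route, and helper names (fetchPageFromCMS, transformPage) are assumptions, not a prescription:

import express from 'express'

const app = express()

// Any front-end fetches ready-to-render data from this endpoint.
app.get('/pages/:slug', async (req, res) => {
  const page = await fetchPageFromCMS(req.params.slug) // hypothetical CMS call
  res.json(transformPage(page)) // reuses the transformer logic described above
})

app.listen(3000)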
This is likely even more work than the second approach. And it has added cost and maintenance in running an additional application, which takes more effort to ensure it is rock solid.
The major upside to this approach is that we could build this as an abstracted engine. In other words, any time we bring in data to any front-end application, it goes through this middleman engine. That means if we have two projects that use the same CMS or data source, our work is cut down significantly for the second project.
If you aren’t doing any of this today and want to start, my advice is to treat these approaches like stepping stones. They grow in complexity and maintenance and power as the application grows. Start with the first approach and see how far that gets you. Then, if you feel like you could benefit from a jump to the second, do it! And if you’re feeling like living dangerously, go for the third!
In the end, what matters most is crafting an experience that both your editors and your developers understand and enjoy. If you can do that, you win!
This is the ninth article documenting what I’ve learned from a series of 13 Trailhead Live video sessions on Modern App Development on Salesforce and Heroku. In these articles, we’re focusing on how to combine Salesforce with Heroku to build an “eCars” app—a sales and service application for a fictitious electric car company (“Pulsar”). eCars allows users to customize and buy cars, service techs to view live diagnostic info from the car, and more. In case you missed my previous article, you can find it here.
Just as a quick reminder: I’ve been following this Trailhead Live video series to brush up and stay current on the latest app development trends on these platforms that are key for my career and business. I’ll be sharing each step for building the app, what I’ve learned, and my thoughts from each session. These series reviews are both for my own edification as well as for others who might benefit from this content.
When you hear us talking about how great serverless is, or read about other companies adopting the technology, you might be wondering if you are behind the times for not adopting 100% serverless architecture.
You can run your Selenium-based test automation scripts on whichever browser you want and distribute them across machines with the Selenium Grid project. This is actually a very simple and easy solution.
In this architecture, the Selenium Grid project lets you stand up a hub and register different machines to it as nodes. The hub then routes each test to a node according to the browser or operating system it requires. If a matching node is busy with another test, the hub queues your test and runs it when the node is free.
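For example, with the selenium-webdriver package for JavaScript, pointing a test at the Grid is mostly a matter of setting the server URL (the hub address below is an assumption):

const { Builder } = require('selenium-webdriver')

async function run() {
  // The hub routes this session to a registered node that offers Chrome.
  const driver = await new Builder()
    .usingServer('http://localhost:4444/wd/hub')
    .forBrowser('chrome')
    .build()
  try {
    await driver.get('https://example.com')
    console.log(await driver.getTitle())
  } finally {
    await driver.quit()
  }
}

run()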
A component architecture is a type of application architecture composed of independent, modular, and reusable building blocks called components. When designing an app following component-based architecture principles, developers combine, reuse, and version these objects, rather than building every inch of an app from scratch.
Let’s be honest: in times of uncertainty where speed is paramount, meeting the increasing app demand while maintaining complex technology is a herculean task for any development team. The value proposition of a component-based architecture is that it speeds up application development and reduces code fragmentation.