Fame, Difficulty, and Usefulness

Pierre de Fermat is best known for two theorems, dubbed his “last” theorem and his “little” theorem. His last theorem is famous, difficult to prove, and useless. His little theorem is relatively arcane, easy to prove, and extremely useful.

There’s little relation between technical difficulty and usefulness.

Pandas DataFrame Functions

Pandas is a Python library that allows users to parse, clean, and visually represent data quickly and efficiently. Here, I will share some useful DataFrame functions that will help you analyze a data set.

First, you have to import the library. Conventionally, we use the alias "pd" to refer to Pandas.
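
As a quick sketch of that first step, plus a few of the DataFrame functions covered below (the file name sales.csv and its contents are hypothetical):

import pandas as pd

# Load a data set into a DataFrame (sales.csv is a made-up example file).
df = pd.read_csv("sales.csv")

df.head()      # preview the first five rows
df.info()      # column names, dtypes, and non-null counts
df.describe()  # summary statistics for the numeric columns
df.dropna()    # return a copy with rows containing missing values removed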

Component Development With Storybook and KendoReact

Storybook provides an environment for experimenting with UI components in isolation through a variety of different configurations. In this article, I'll show you how to get started with Storybook, highlight a few add-ons, and showcase a Storybook built for KendoReact.
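
To give a flavor of what that looks like, here is a minimal, hypothetical story file in Storybook's Component Story Format (the Button component and its props are placeholders, not part of KendoReact):

// Button.stories.js — a hypothetical story file
import React from 'react';
import { Button } from './Button';

// The default export tells Storybook how to group these stories.
export default { title: 'Button', component: Button };

// Each named export is one story: the component in a specific state.
export const Primary = () => <Button primary>Click me</Button>;
export const Disabled = () => <Button disabled>Can't click me</Button>;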

I've been working with Storybook for the past few months. I've found it to be an excellent environment to test UI components in an isolated manner. What exactly is Storybook? From Learn Storybook:

The Browser Wars and the Birth of JavaScript

“Any application that can be written in JavaScript will eventually be written in JavaScript.” — Atwood’s Law, stated by Jeff Atwood in a blog post titled “The Principle of Least Power,” July 17, 2007

Before anything like an Android device or iPhone existed, desktop computers were the battleground for the browser wars. The battle involved billions of dollars invested by a number of companies, all based on the premise that whoever ruled the desktop browser market would own the internet. Today, mobile devices account for nearly half of all website traffic. Back in the 1990s, however, almost all of the action on the web came from desktop machines, and the vast majority of those desktop machines were running some flavor of Windows.

Why Developers Love Visual Studio

About a month ago, a fellow developer asked me why developers love Visual Studio rather than other IDEs like Visual Studio Code, WebStorm, or Rider. He mentioned that Visual Studio Code should be enough for anybody and added that Visual Studio has become bloated over its years of development.

While I do have copies of Visual Studio Code, WebStorm, and Rider in my possession, I somehow keep coming back to Visual Studio (currently 2019). Today, I wanted to analyze why developers (myself included) keep coming back to Visual Studio and share why developers love it so much.

Input Alert in Objective-C

Introduction

We have all seen input alerts in iOS apps that look like this:

[Image: an example input alert in an iOS app]

In this article, we will learn how to create an input alert in Objective-C using Xcode. I’ve written this article for beginners. If you need help getting started with Xcode, you can check out my first tutorial here. If you are already familiar with Xcode, add one button, wire up its Touch Up Inside action, and jump to step seven.
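
For reference, the heart of the technique is UIAlertController with an attached text field. Here is a minimal sketch (the method name, titles, and placeholder text are illustrative, not the article's exact steps):

- (IBAction)showInputAlert:(UIButton *)sender {
    // Build an alert with a single editable text field.
    UIAlertController *alert = [UIAlertController alertControllerWithTitle:@"Input"
                                                                   message:@"Please enter a value"
                                                            preferredStyle:UIAlertControllerStyleAlert];
    [alert addTextFieldWithConfigurationHandler:^(UITextField *textField) {
        textField.placeholder = @"Type here";
    }];
    // Read the field's text back when the user taps OK.
    UIAlertAction *ok = [UIAlertAction actionWithTitle:@"OK"
                                                 style:UIAlertActionStyleDefault
                                               handler:^(UIAlertAction *action) {
        NSLog(@"User entered: %@", alert.textFields.firstObject.text);
    }];
    [alert addAction:ok];
    [self presentViewController:alert animated:YES completion:nil];
}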

11 Best Tools for Blockchain Development

Solidity

Solidity is one of the most popular programming languages used in blockchain development. It supports an object-oriented paradigm and is used to write smart contracts. Ethereum dApps can also be coded with Solidity. Solidity is designed to target the Ethereum Virtual Machine (EVM).

So, what makes Solidity so unique? First of all, it is used in one of the most popular blockchain solutions, Ethereum. Secondly, it can be used to write smart contracts that open up a variety of use cases, especially crowdfunding, voting, and multi-signature wallets.
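
To give a taste of the language, here is a minimal, hypothetical crowdfunding contract: a sketch of what Solidity looks like, not production code.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical minimal crowdfunding sketch: anyone can contribute ether,
// and the owner can withdraw the funds once the goal is reached.
contract Crowdfund {
    address payable public owner;
    uint256 public constant GOAL = 10 ether;
    mapping(address => uint256) public contributions;

    constructor() {
        owner = payable(msg.sender);
    }

    function contribute() external payable {
        contributions[msg.sender] += msg.value;
    }

    function withdraw() external {
        require(msg.sender == owner, "only the owner can withdraw");
        require(address(this).balance >= GOAL, "goal not reached");
        owner.transfer(address(this).balance);
    }
}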

The Future of Security (Part Two)

To understand the current and future state of the cybersecurity landscape, we spoke to and received written responses from 50 security professionals. We asked them, “What’s the future of cybersecurity from your perspective?” The most frequent responses focused on AI, ML, and automation and are shared in Part 1.

DevSecOps

  • Gartner predicts that, by 2020, more than 50 percent of global organizations will be running containerized applications in production; today, fewer than 20 percent do. Everyone’s trying to move faster. This increases the risks inherent in moving away from a waterfall pipeline of apps developed entirely in-house to apps developed in cloud-native environments, incorporating code written by others and delivered through continuous parallel pipelines.

    That’s why DevOps and information security teams should work together to facilitate security's "Shift Left" to the beginning of the development cycle. Shift Left is a well-understood concept in developer circles, and it needs to become just as familiar from a security perspective so that potential security issues are identified and remedied throughout the development cycle, before applications move into production.
  • Encourage developers to adopt DevSecOps technologies and practices (using cloud app sec services and specialized app sec technologies designed for each persona in the Software Life Cycle). Also, grow security champions who will pioneer the best security practices in their DevOps teams.
  • The future of cybersecurity will be the continued integration of security directly into the software development lifecycle. Organizations will continue to adopt DevSecOps processes out of necessity, and as current DevOps and security engineers grow into leadership positions, the DevSecOps philosophy will flourish.
  • Some private data centers will always remain. So, while the migration to the cloud has accelerated, there will always be two frontiers for dev/sec/ops to master. Ephemerality will be a primary tenet of cybersecurity.

Threat Vector Expansion

  • Phishing is not going away anytime soon; it’s an easy entry vector. Most people have hardened perimeters, but the use of so many outside services connected to other networks could compromise your security if those networks become compromised. Make sure you have a robust risk management program in place, and identify risk to the organization before a new service is implemented. We are seeing more whaling going after CEOs, CIOs, and COOs because of the level of access they have in the network. It is more prevalent than regular phishing, more sophisticated, and more business-like. It’s truly hard to detect.
  • We should certainly expect an increase in the number of endpoints everywhere — employees have gone from mainly using desktops to working across smartphones and tablets, and a slew of IoT devices is likely just around the corner. The sheer quantity of data in play will continue to increase as well. Cybersecurity systems will need to be prepared to face these trends.

Other

  • Cybersecurity solutions will enable unified security and access controls across all systems, no matter what type they are or where they are. We are moving into distributed environments, which makes it difficult to protect assets one by one. There is a need for more unification and a holistic approach to ensuring the safety of our systems and data.
  • We’re not at the Skynet level yet. Very few organizations have a successful foundation for real-time response. It starts with basic security hygiene, patching, and environment management. We have to be the drivers who move companies in this direction. GDPR and fines are moving the needle; we now have a monetary value for the risk. Make executive management understand the potential financial risk of faulty security.
  • I think the future lies in addressing the bigger picture and visualizing an entire kill chain of an attack. Security tools will need to be able to correlate security events from disparate vectors to effectively do this. But it’s certainly a key component to successfully navigating security into the future.
  • More plane crashes, if you will. Until individuals are impacted, I do not foresee any legal repercussions to hold people accountable. Learn from safety culture; we are doomed to repeat history if we do not. Be transparent and perform root-cause analysis.
  • We need a better understanding of the knowledge architecture behind specific problems and increased capabilities to collect and interpret information from different tools. Automation is an important piece, especially in application security: analysis, prioritization, risk identification, and remediation management. Manual analysis does not scale and does not work.
  • In addition to identity management, we’ll see shorter feedback loops to provide feedback while the developer is actively writing code. The development environment needs to be smarter, more contextual; it should ensure that the biggest problems we encounter are not committed in version control.
  • Building a culture of security with the help of tools, processes, and training is the strongest tool organizations have against malware and malicious attackers. In the near future, cybersecurity must have a seat at the table in corporate governance. We’ve seen the rise of the CISO over the past 10 years. This is promising, and companies are starting to view cybersecurity with equal importance to financial audits. If they don’t, they may not be in business in the next 10 years.
  • Endpoint hygiene is getting more emphasis. Going forward, we will need a combination of preventive and reactive measures to harden endpoints, where 80 percent of attacks take place. We need to focus on preventing attacks; when one does get through, we need to ensure we have the right reactive tools in place to deep-dive on those hacks.
  • We need a more vertical focus. Specialization and custom-tailoring security services to meet the needs of a specific market is a key trend. The breadth of security features a customer needs is vast, and the only effective way for the model to evolve is for more services to be delivered from the cloud.
  • Security has been doing the same things that haven’t been working for the last 50 years. If we want this to change, we need to take a more disciplined and process-based approach with our security programs. Once we have that in place, then we need to automate. The rise of APIs, along with automation and orchestration, will allow security to move in the direction that simply wasn’t possible before. 
  • With the use of SSL certificates on websites finally becoming standard, people are recognizing the need for strong encryption of data in transit. Virtual private networks (VPNs) take this concept a step further by routing all traffic through an encrypted network. This protects your information from prying eyes, as it travels through the open internet. I foresee the widespread adoption of VPN technology in the near future, as more consumers become educated about the risks they face on the internet.
  • The security industry as a whole is still thinking about perimeter protection around defined boundaries. Meanwhile, the workforce is shifting to remote work, and companies are migrating their infrastructures from self-managed to public clouds. All of these changes mean more challenges for securing corporate data because boundaries are becoming less defined. Assumptions and decisions regarding which users are authorized to access what data with which devices will have to evolve to meet these boundary changes. I predict more organizations will look into implementing a Zero Trust model in the next five years.
  • In the future, we’ll continue to see the pendulum swing faster toward a strong security posture. Cyber hygiene practices (e.g., Zero Trust) will continue to expand and demonstrate their impact with IT and security teams. Additionally, the focus on cyber hygiene will eliminate much of the waste in security architectures and assets, removing much of the agent and tool bloat.
  • The myriad of emerging threats has created the perfect storm for years ahead. From never-before-seen attacks on newly engineered biometric markers and the broad embrace of blockchain, to expanded risks posed for “new” critical infrastructure and the transfer of trust, organizations must look to the threat horizon and accelerate and collaborate to out-innovate and out-maneuver the attackers.

    As an industry, we’re getting better at prioritizing the security measures that will have the most impact on security posture and measurably reduce the most risk across the enterprise. If more organizations continue to take this approach, they’ll be able to keep up with the constantly changing threat landscape. Further, many of the most noteworthy recent breaches were the direct result of unsecured sensitive information living in public repositories, especially at companies using DevOps and the cloud to bring new applications to market at high velocity. Attackers are taking advantage of the failure of public and private organizations to implement basic security practices, such as securing privileged access; it’s becoming an epidemic.

    In the future, we’ll see major public repositories start introducing sophisticated guardrails designed to prevent developers from accidentally uploading security secrets. Organizations, however, can’t rely on these safeguards. It’s critical that they institutionalize a security-first culture in which everyone — not just developers — is empowered to “own” security and is provided with the tools and solutions needed to make it easier to keep networks secure without impacting DevOps workflows.

Please see Part 1 for thoughts on the future of security around AI, ML, and automation.

What You Need To Know About pBFT Consensus

For many, blockchain is a fairly new technology whose applications have yet to be explored. Despite this, it is difficult to find an area in which blockchain and a distributed ledger could not be applied. For example, IoT networks of smart devices can make it possible to create cost-efficient, fully networked, and secure smart cities using the blockchain.

Distributed ledger technologies offer a plethora of benefits, including increased privacy, enhanced trust in the network, increased speed, and reduced transaction costs.

How Google PageSpeed Works: Improve Your Score and Search Engine Ranking

This article is from my friend Ben, who runs Calibre, a tool for monitoring the performance of websites. We use Calibre here on CSS-Tricks to keep an eye on things. In fact, I just popped over there to take a look and was notified of some little mistakes that slipped by, and I fixed them. Recommended!

In this article, we uncover how PageSpeed calculates its critical speed score.

It’s no secret that speed has become a crucial factor in increasing revenue and lowering abandonment rates. Now that Google uses page speed as a ranking factor, many organizations have become laser-focused on performance.

Last year, Google made two significant changes to their search indexing and ranking algorithms:

From this, we’re able to state two truths:

  • The speed of your site on mobile will affect your overall SEO ranking.
  • If your pages load slowly, it will reduce your ad quality score, and ads will cost more.

Google wrote:

Faster sites don’t just improve user experience; recent data shows that improving site speed also reduces operating costs. Like us, our users place a lot of value in speed — that’s why we’ve decided to take site speed into account in our search rankings.

To understand how these changes affect us from a performance perspective, we need to grasp the underlying technology. PageSpeed 5.0 is a complete overhaul of previous editions. It’s now powered by Lighthouse and CrUX (the Chrome User Experience Report).

This upgrade also brings a new scoring algorithm that makes it far more challenging to receive a high PageSpeed score.

What changed in PageSpeed 5.0?

Before 5.0, PageSpeed ran a series of heuristics against a given page. If a page had large, uncompressed images, PageSpeed would suggest image compression. Cache headers missing? Add them.

These heuristics were coupled with a set of guidelines that, if followed, would likely result in better performance, but they were merely superficial and didn’t analyze the actual load and render experience that real visitors face.

In PageSpeed 5.0, pages are loaded in a real Chrome browser that is controlled by Lighthouse. Lighthouse records metrics from the browser, applies a scoring model to them, and presents an overall performance score. Guidelines for improvement are suggested based on how specific metrics score.

Like PageSpeed, Lighthouse also has a performance score. In PageSpeed 5.0, the performance score is taken from Lighthouse directly. PageSpeed’s speed score is now the same as Lighthouse’s Performance score.
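
If you want to read that score yourself, Lighthouse can be run programmatically. A minimal sketch, assuming the lighthouse and chrome-launcher npm packages (exact API details vary by version):

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  // Launch a headless Chrome for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const { lhr } = await lighthouse('https://example.com', {
    port: chrome.port,
    onlyCategories: ['performance'],
  });
  // The performance score is 0-1; PageSpeed displays it as 0-100.
  console.log(Math.round(lhr.categories.performance.score * 100));
  await chrome.kill();
})();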

[Image: Calibre scoring 97 on Google’s PageSpeed]

Now that we know where the PageSpeed score comes from, let’s dive into how it’s calculated, and how we can make meaningful improvements.

What is Google Lighthouse?

Lighthouse is an open source project run by a dedicated team from Google Chrome. Over the past couple of years, it has become the go-to free performance analysis tool.

Lighthouse uses Chrome’s Remote Debugging Protocol to read network request information, measure JavaScript performance, observe accessibility standards, and measure user-focused timing metrics like First Contentful Paint, Time to Interactive, and Speed Index.

If you’re interested in a high-level overview of Lighthouse architecture, read this guide from the official repository.

How Lighthouse calculates the Performance Score

During performance tests, Lighthouse records many metrics focused on what a user sees and experiences. There are six metrics used to create the overall performance score. They are:

  • Time to Interactive (TTI)
  • Speed Index
  • First Contentful Paint (FCP)
  • First CPU Idle
  • First Meaningful Paint (FMP)
  • Estimated Input Latency

Lighthouse applies a 0–100 scoring model to each of these metrics. This process works by obtaining the mobile 75th and 95th percentiles from HTTP Archive, then applying a log-normal function.

Following the algorithm and reference data used to calculate Time to Interactive, we can see that if a page manages to become “interactive” in 2.1 seconds, the Time to Interactive metric scores 92/100.
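
To make the curve concrete, here is a toy Python sketch of log-normal scoring. The anchor values below are hypothetical stand-ins, not Lighthouse's real control points (which are derived from HTTP Archive data):

import math
from statistics import NormalDist

def lognormal_score(value_ms, ref_a_ms, score_a, ref_b_ms, score_b):
    # Fit a log-normal curve through two (value -> score) anchors,
    # then score a measurement as 1 - CDF(value): faster is better.
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - score_a)
    z_b = nd.inv_cdf(1 - score_b)
    sigma = (math.log(ref_b_ms) - math.log(ref_a_ms)) / (z_b - z_a)
    mu = math.log(ref_a_ms) - z_a * sigma
    return 1 - nd.cdf((math.log(value_ms) - mu) / sigma)

# With made-up anchors (7,300 ms scores 0.50; 14,000 ms scores 0.25),
# a 2.1-second TTI lands at roughly 90/100.
print(round(lognormal_score(2100, 7300, 0.50, 14000, 0.25) * 100))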

Once each metric is scored, it’s assigned a weighting which is used as a modifier in calculating the overall performance score. The weightings are as follows:

Metric                        Weighting
Time to Interactive (TTI)         5
Speed Index                       4
First Contentful Paint            3
First CPU Idle                    2
First Meaningful Paint            1
Estimated Input Latency           0

These weightings reflect the impact of each metric on the mobile user experience.
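
In other words, the overall score is a weighted average of the six metric scores. A small sketch using the weights from the table above (the per-metric scores are hypothetical):

# Weights are from the table above; the metric scores (0-100) are made up.
weights = {
    "Time to Interactive": 5,
    "Speed Index": 4,
    "First Contentful Paint": 3,
    "First CPU Idle": 2,
    "First Meaningful Paint": 1,
    "Estimated Input Latency": 0,
}
scores = {
    "Time to Interactive": 92,
    "Speed Index": 88,
    "First Contentful Paint": 95,
    "First CPU Idle": 90,
    "First Meaningful Paint": 96,
    "Estimated Input Latency": 100,
}

overall = sum(scores[m] * w for m, w in weights.items()) / sum(weights.values())
print(round(overall))  # 92 with these example scores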

In the future, this may also be enhanced by the inclusion of user-observed data from the Chrome User Experience Report dataset.

You may be wondering how the weighting of each metric affects the overall performance score. The Lighthouse team has created a useful Google Spreadsheet calculator that explains this process:

[Image: a Google Spreadsheet used to calculate performance scores]

Using the example above, if we change Time to Interactive from 5 seconds to 17 seconds (the global average mobile TTI), our score drops to 56 out of 100.

If we instead change First Contentful Paint to 17 seconds, we’d score 62 out of 100.

TTI is the metric with the greatest impact on your performance score, so to receive a high PageSpeed score, you will need a speedy TTI measurement.

Moving the needle on TTI

At a high level, there are two significant factors that hugely influence TTI:

  • The amount of JavaScript delivered to the page
  • The run time of JavaScript tasks on the main thread

Our Time to Interactive guide explains how TTI works in great detail, but if you’re looking for some quick, no-research wins, we’d suggest:

Reducing the amount of JavaScript

Where possible, remove unused JavaScript or focus on delivering only the script that the current page will run. That might mean removing old polyfills or replacing third-party libraries with smaller, more modern alternatives.

It’s important to remember that the cost of JavaScript is not only the time it takes to download. The browser needs to decompress, parse, compile, and eventually execute it, which takes non-trivial time, especially on mobile devices.

Effective measures for reducing the amount of script on your pages include:

  • Reviewing and removing polyfills that are no longer required for your audience.
  • Understanding the cost of each third-party JavaScript library. Use webpack-bundle-analyzer or source-map-explorer to visualize how large each library is.
  • Modern JavaScript tooling (like webpack) can break up large JavaScript applications into a series of small bundles that are automatically loaded as a user navigates. This approach is known as code splitting and is extremely effective in improving TTI (see the sketch after this list).
  • Service workers that cache the bytecode result of a parsed and compiled script. If you’re able to make use of this, visitors pay a one-time performance cost for parsing and compilation; after that, the cached result takes over.
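
As promised, here is a minimal code-splitting sketch using a dynamic import() (the module path and function name are hypothetical):

// The chart code is only fetched when the user actually asks for it.
// Bundlers like webpack emit the dynamically imported module as a
// separate chunk, keeping it out of the initial page load.
document.querySelector('#show-report').addEventListener('click', async () => {
  const { renderChart } = await import('./charts.js');
  renderChart(document.querySelector('#report'));
});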

Monitoring Time to Interactive

To successfully uncover significant differences in user experience, we suggest using a performance monitoring system (like Calibre!) that allows for testing a minimum of two devices: a fast desktop and a low-to-mid-range mobile phone.

That way, you’ll have data for both the best and worst cases of what your customers experience. It’s time to come to terms with the fact that your customers aren’t using the same powerful hardware as you.

In-depth manual profiling

To get the best results in profiling JavaScript performance, test pages using intentionally slow mobile devices. If you have an old phone in a desk drawer, this is a great second life for it.

An excellent substitute for a real device is Chrome DevTools’ hardware emulation mode. We’ve written an extensive performance profiling guide to help you get started with runtime performance.
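
For scripted tests, similar throttling can be applied from code. A minimal sketch using Puppeteer and the Chrome DevTools Protocol (the URL and the 4x rate are arbitrary examples):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Slow the CPU 4x to roughly approximate a mid-range phone.
  const client = await page.target().createCDPSession();
  await client.send('Emulation.setCPUThrottlingRate', { rate: 4 });
  await page.goto('https://example.com');
  // ...collect timings or run profiles here...
  await browser.close();
})();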

What about the other metrics?

Speed Index, First Contentful Paint, and First Meaningful Paint are all browser-paint-based metrics. They’re influenced by similar factors and can often be improved at the same time.

It’s objectively easier to improve these metrics because they are calculated by how quickly a page renders. Following the Lighthouse Performance audit rules closely will result in these metrics improving.

If you aren’t already preloading your fonts or optimizing for critical requests, that is an excellent place to start a performance journey. Our article titled "The Critical Request" explains in great detail how the browser fetches and renders critical resources used to render your pages.

Tracking your progress and making meaningful improvements

Google's newly updated Search Console, Lighthouse, and PageSpeed Insights are a great way to get initial visibility into the performance of your pages, but they fall short for teams that need to continuously track and improve the performance of their pages.

Continuous performance monitoring is essential to ensuring speed improvements last, and teams get instantly notified when regressions happen. Manual testing introduces unexpected variability in results and makes testing from different regions as well as on various devices nearly impossible without a dedicated lab environment.

Speed has become a crucial factor for SEO rankings, especially now that nearly 50% of web traffic comes from mobile devices.

To avoid losing your search positioning, ensure you're using an up-to-date performance suite to track key pages. (Pssst, we built Calibre to be your performance companion. It has Lighthouse built-in. Hundreds of teams from around the globe are using it every day.)


What I Like About Vue

Dave Rupert digs into some of his favorite Vue features and one particular issue that he has with React:

I’ve come to realize one thing I don’t particularly like about React is jumping into a file, reading the top for the state, jumping to the bottom to find the render function, then following the method calls up to a series of other sub-rendering functions, only to find the component I’m looking for is in another castle. That cognitive load is taxing for me.

I wrote about this very problem recently in our newsletter where I argued that finding my way around a React component is difficult. I feel like I have to spend more energy than necessary figuring out how a component works because React encourages me to write code in a certain way.

On the other hand, Dave says that Vue matches his mental model when authoring components:

<template>
  <!-- Start with a foundation of good HTML markup -->
</template>
<script>
  // Add interaction with JavaScript
</script>
<style>
  /* Add styling as necessary. */
</style>

And this certainly matches the way I think about things, too.


Collective #535

Rooki

Rooki is an online magazine for students, interns and juniors with intimate interviews, free design student awards and free resources.

Index fun

A very interesting analysis of which z-index values are used on websites. By Philippe Suter.

Progress Tracker

An updated HTML component to illustrate the steps in a multi-step process, e.g., a multi-step form, a timeline, or a quiz.

Free Font: Le Murmure

Recently expanded with Cyrillic, Le Murmure is a custom open-source typeface designed by Jérémy Landes for the design agency Murmure and released by Velvetyne Type Foundry.

Collective #535 was written by Pedro Botelho and published on Codrops.

15 CSS Background Effects

Did you know that you can use CSS to create beautiful animations and interesting effects? Combined with HTML and JavaScript, or even on its own, CSS can be extremely powerful. You’d be surprised at what developers can create.

From simple scrolling animations to complex environments built entirely of code, these effects can add a lot of personality to your website.

What if you could use CSS backgrounds created by others for free? Sites like CodePen were made to host open source or other freely-licensed code, which means you can use it in your own projects with few-to-no stipulations.

This is also helpful for designers who want to learn CSS or pull off a similar, but personalized look. You can use these code snippets as a base to create your own effects.

There are a ton of developers who have created amazing CSS backgrounds and released them for free. Today we’ve collected 15 of the most stunning ones. See for yourself what you can do with a creative mind and a little code!
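
As a tiny taste of the genre, here is a hypothetical animated gradient background, similar in spirit to the Gradient Background Animation pen below:

body {
  /* Oversize the gradient, then pan it back and forth. */
  background: linear-gradient(270deg, #ff6b6b, #4ecdc4, #45b7d1);
  background-size: 600% 600%;
  animation: gradient-shift 12s ease infinite;
}

@keyframes gradient-shift {
  0%   { background-position: 0% 50%; }
  50%  { background-position: 100% 50%; }
  100% { background-position: 0% 50%; }
}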

Parallax Pixel Stars

See the Pen
Parallax Star background in CSS
by Saransh Sinha (@saransh)
on CodePen.

Gradient Background Animation

See the Pen
Pure CSS3 Gradient Background Animation
by Manuel Pinto (@P1N2O)
on CodePen.

Frosted Glass Effect

See the Pen
CSS only frosted glass effect
by Gregg OD (@GreggOD)
on CodePen.

Shooting Star

See the Pen
Only CSS: Shooting Star
by Yusuke Nakaya (@YusukeNakaya)
on CodePen.

Fixed Background Effect

[Image: example of a fixed background effect]

Tri Travelers

See the Pen
Tri Travelers
by Nate Wiley (@natewiley)
on CodePen.

ColorDrops

See the Pen
ColorDrops
by Nate Wiley (@natewiley)
on CodePen.

Animated Background Header

See the Pen
Animated Background Header
by Jasper LaChance (@jasperlachance)
on CodePen.

Zero Element: DeLight

See the Pen
Zero Element: DeLight
by Keith Clark (@keithclark)
on CodePen.

Glowing Particle Animation

See the Pen
CSS Glowing Particle Animation
by Nate Wiley (@natewiley)
on CodePen.

Background Image Scroll Effect

See the Pen
Pure CSS Background Image Scroll Effect
by carpe numidium (@carpenumidium)
on CodePen.

Multiple Background Image Parallax

See the Pen
CSS Multiple Background Image Parallax Animation
by carpe numidium (@carpenumidium)
on CodePen.

Bokeh Effect

See the Pen
Bokeh effect (CSS)
by Louis Hoebregts (@Mamboleoo)
on CodePen.

Calm Breeze Login Screen

See the Pen
Calm breeze login screen
by Lewi Hussey (@Lewitje)
on CodePen.

Effect Text Gradient

See the Pen
Effect Text Gradient
by Diogo Realles (@SoftwaRealles)
on CodePen.

Creatively Beautiful CSS Backgrounds

Good web design leaves a lasting impression on visitors, and there’s always something extra enchanting about a well-made animation. Whether you go all out with an animated starry or gradient background, or you just add some elegant and subtle parallax effects to your site, it can do wonders for your design.

CodePen hosts exclusively open source code, made by developers as a contribution to the community. So, if one of these effects caught your eye, feel free to copy it, tweak it, or use it as a base for making your own CSS animations.

Just remember to keep the same license, and everything on CodePen is free to use.

5 Reasons to Model During QA, Part 1/5: “Shift Left” QA Uproots Design Defects

Model-Based Testing (MBT) itself is not new, but Model-Based Test Automation is experiencing a resurgence in adoption. Model-Based Testing is the automation technique with the greatest current business interest according to the 2018 World Quality Report, with 61% of respondents stating that they can foresee their organization adopting it in the coming year. [1]

Technologies like The VIP Test Modeler have significantly reduced the time and technical knowledge needed to model complex systems. Organizations can now enjoy all the benefits of Model-Based techniques within short iterations, whereas previously modeling had been reserved for only the highest-stakes projects, such as lengthy Waterfall projects in aerospace and defense.

The Role of QA in Business Continuity

You’re ready to modernize your business, but you worry that a disruptive digital transformation will lead to downtime, unforeseen costs, and broken applications.

It’s a common quagmire in this new "age of the customer," where consumers expect more and more from companies’ digital experiences. And you need to keep up with demand: 73% of consumers will jump ship to a competitor if a website is slow (Econsultancy). What do you think will happen if your mobile app crashes?

DevOps in Financial Services: Be Like Messi

“DevOps” continues to be the buzzword across technology departments in financial institutions. It means different things to different people, regularly including automation, change management, deployment, continuous delivery, and culture change. Essentially, it is an “easy” fix where Development and Operations collaborate to save cost, improve productivity, and lower risk.

In practice, success rates in the DevOps transformation journey vary dramatically across the industry. Delivery managers tasked with scaling an organization’s capabilities have faced challenges in cultural resistance, disparate tooling, complex procedural changes, deep-rooted silos, and conflicting advice/recommendations across the industry.

A Frank Discussion With Your Manager About Why You Need to Go to DevOps World | Jenkins World 2019

Do you work in sales? If you’re part of a team in a DevOps environment, your answer is an immediate “no.” As an engineer, testing professional, or operations leader, you are performing technical tasks that help move high-quality software through the pipeline and out into production.

Still, let’s face it, part of all of our jobs is selling something to somebody. You may have to coax a teammate to approach coding a certain feature in a different way than they had planned or convince them to run one last QA test for you...again. You certainly also face situations where you have to sell your boss on an idea you came up with that might just help the team do a better job on an important task.

Cloud vs. On-Premise Software Deployment – What’s Right for You?

In the modern world of enterprise IT, cloud computing has become an indispensable tool for the integration of outside services through remote servers handling requests and responses for the data that drives our lives. However, not too long ago, integrating with third-party services meant housing servers on-site and maintaining those connections yourself. This is referred to as On-Premise (on-prem) deployment and is still a viable means of integrating the data that contributes to your application’s functionality.

Unsurprisingly, there are benefits and drawbacks to both means of integrating software and services into your codebase. In the following article, we’ll discuss some of the pros and cons of both cloud and on-prem and try to give you a better idea of what to look for when building out your application.