Dynamsoft's Latest Dynamic Web TWAIN SDK Boosts Scanning and Uploading

The latest release of Dynamsoft's Dynamic Web TWAIN document scanning software development kit (SDK), version 15, offers a faster way to initiate scan jobs and brings the Windows and macOS editions closer to feature parity. The SDK is widely used to quickly add an online document scanner component to document management applications running in popular web browsers.

Vericred Launches Disruption Analysis API

Vericred, a company that provides the infrastructure for the digital distribution of health insurance, recently launched its Disruption Analysis API. The API allows users to assess the disruption (positive or negative) that would take place by switching health or dental insurance plans for a given group of employees. The company anticipates that the likely users of the API will be employer benefits departments, insurance brokers, and the InsurTech companies that build tools for them.

Newspack Opens Up Application Process for Phase Two

Earlier this year, Newspack chose twelve publications to take part in the initial rollout phase of the platform. Newspack is a collection of themes, plugins, and newsroom-oriented features such as revenue generation wizards, mobile delivery, and search engine optimization.

Steve Beatty, head of Newspack Communication, says they’re seeking up to 50 newsrooms to take part in phase two, which runs from September 1st to February 29th, 2020.

“What you’ll get: a new Newspack website, including the migration of your existing site; free hosting, security, updates, backups and support on WordPress.com through February 2020; membership in the Newspack community of users; access to Newspack developers; exclusive performance benchmarking against your peers; and more,” Beatty said.

Organizations that are selected are expected to provide feedback, test new features, and help shape the overall direction of the platform.

Free hosting for charter members will expire on February 29th, 2020. News organizations with revenue under $500K can expect to pay $1,000 per month and organizations that generate revenue of over $500K will pay $2,000 per month. Newspack is currently in negotiations to provide subsidies for organizations that encounter difficulties with the pricing structure.

Those interested in participating in the charter program have until August 15th to fill out the application.

RedisTimeSeries GA: Making the 4th Dimension (in Redis) Truly Immersive

On the 27th of June, we announced the general availability (GA) of RedisTimeSeries v1.0. RedisTimeSeries is a Redis module developed by Redis Labs to enhance your experience managing time series data with Redis. We released RedisTimeSeries in preview/beta mode over six months ago and appreciate all the great feedback and suggestions we received from the community and our customers as we worked together on this first GA version. To mark this release, we performed a benchmark in which RedisTimeSeries achieved 125K queries per second, compared against other time series approaches in Redis. Skip ahead for the full results, or take a moment to first learn about what led us to build this new module.

Why RedisTimeSeries?

Many Redis users have been using Redis for time series data for almost a decade and have been happy and successful doing so. As we will explain later, these developers are using the generic native data structures of Redis. So let’s first take a step back to explain why we decided to build a module with a dedicated time series data structure.

MySQL Database Table Data Purge/Removal Using MySQL Event Scheduler

In this article, let's look at

  1. Deleting table data in batches/chunks
  2. Logging each iteration
  3. Handling and logging errors
  4. Creating a recurring event to cleanup/purge table data regularly

Recently, I was working on a utility to purge table data from a MySQL database. In this post, I will share my experience of creating a recurring event in MySQL to purge/remove table data.
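The batch-delete-with-logging pattern described above can be sketched in Python, using sqlite3 as a lightweight stand-in for MySQL (the `events` table, `created_at` column, and `purge_log` table are hypothetical names for illustration):

```python
import sqlite3

def purge_in_batches(conn, table, cutoff, batch_size=1000):
    """Delete rows older than `cutoff` in fixed-size batches, logging each pass."""
    total = 0
    while True:
        # Delete at most `batch_size` matching rows per iteration to keep
        # transactions short and avoid long lock times on the table.
        cur = conn.execute(
            f"DELETE FROM {table} WHERE rowid IN "
            f"(SELECT rowid FROM {table} WHERE created_at < ? LIMIT ?)",
            (cutoff, batch_size),
        )
        deleted = cur.rowcount
        # Log each iteration so the purge can be audited afterwards.
        conn.execute(
            "INSERT INTO purge_log (table_name, rows_deleted) VALUES (?, ?)",
            (table, deleted),
        )
        conn.commit()
        total += deleted
        if deleted < batch_size:  # last (partial) batch done
            break
    return total
```

In MySQL itself, the same loop would typically live in a stored procedure using `DELETE ... LIMIT`, invoked from a recurring event created with `CREATE EVENT ... ON SCHEDULE EVERY 1 DAY`.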

The Ultimate Guide To Graphic Design Basics For The Creative Eye

Do you want to become a graphic designer and use your creative eye to influence big businesses? Here is your ultimate guide to the graphic design basics, from how the career took off and what it’s turned into today to how you can become a designer and more.  If you have a creative eye and...


APIs = Access

To gather insights on the current and future state of API management, we asked IT professionals from 18 companies to share their thoughts. We asked them, "How does your company use APIs?" Here's what they told us:

Data Access

  • More data sources are being accessed through APIs. We communicate with APIs to read data from, and write data back to, those systems. APIs are at the core of how we develop applications, manage DevOps, and communicate with source systems.
  • APIs are quickly becoming the de facto way organizations deliver value in a digital world. Companies create discrete applications and data sets and expose them as a series of API-enabled services. The customer experience is then highly dependent on the strength of the underlying API architecture. We’ve seen this trend before. Web apps exploded in the early 2000s and then mobile apps earlier this decade. Now, APIs are following suit.

    The organizations that master this wave of API development, and deliver compelling experiences via those APIs, will drive a decade of competitive advantage. And as with all new emerging trends, the key to success is part people and process — hiring for API skills and adopting an API-first development practice — and part technology that provides a secure, fast, and scalable API infrastructure.
  • APIs provide access to data, software, and applications to enable the business to run and applications to provide a better user experience and customer experience. 
  • Our platform is a publisher of hundreds of business banking API endpoints as well as a heavy consumer of other industry and utility APIs ourselves. It’s safe to say that without APIs there would be no platform. We’re in the business of connecting businesses and corporate clients with banks, which is only possible over rich API data exchange. The problem is — every bank today has its own way of offering services to clients, so we had to create a layer that enables the delivery of services to many, in many ways.

    In our case — connecting any business application to any bank without the hassle of re-implementing the whole product for every bank-and-client combination. The current state for many is a file-based exchange with banks over a variety of protocols and formats, which in most cases means a custom project for each client to establish a connection between their ERP and bank services or data.
  • We use APIs to request permissioned data and perform transactional capabilities. 
  • As the Data-as-a-Service company, we use APIs in two directions: first, for obtaining data from vendors such as Google and Amazon, and second — supplying it to our customers, who, for the most part, represent the Marketing Technology industry.

Internal and External Development

  • Primarily mobile development and quite a bit of web. There would be no apps or websites without APIs. Integration is a big issue.
  • We have a cloud-based integration platform that makes extensive use of APIs. Integration historically started with files, then moved to databases and service buses, and now relies heavily on APIs. Many modern software applications provide a set of published REST APIs. These provide the perfect integration point, as they supply both the transport (HTTPS) and the data format (JSON); in the past, with files, integration required two distinct sets of technology: a file transport and a file format. Since many modern software applications publish their APIs, we have built connectors to these applications. The connectors encapsulate the API definition into an easy-to-use package on our integration platform.

    A user wishing to build an integration can select two connectors (for example Salesforce and Microsoft Dynamics Finance and Operations) and easily map customer information from Salesforce to Dynamics Finance. For users wishing to connect to APIs that we do not have a connector available, they can set up their own definitions and have our integration platform connect to any API. In addition to these API consumer use cases, we also have API provider capability in our integration platform. This allows a user to define a set of APIs on our platform (for example, to check order status) that they can then make public.

    This published API could then be connected to a backend system such as an ERP system. When a customer used the public-facing API published on our integration platform, the API could be connected to the backend ERP including translation and security to provide an update on their order.
  • We generally don't use the kits provided by the main providers but rather an HTTP library available in our programming language. For us, it's "requests" in Python. We then read the API documentation for each service and implement the calls needed to make it work. We have developed internal libraries to handle our most frequent usages, which include Stripe, Mailgun, and Intercom.
  • All the products and services we provide can be accessed via APIs. While we understand the value of web applications to convey meaning in a visual way or to engage with a broad audience, it is in our DNA to first and foremost engage with developers — and we do that by providing them with carefully built and documented APIs to interface their own products with ours. 
  • APIs power every piece of our product ecosystem. We have internal APIs that our web and desktop applications use, as well as public-facing APIs that our customers use to automate and customize their workflows. Beyond our own APIs, we integrate against a number of third-party APIs — from version control systems like GitHub to API gateways like Axway. 
  • We use APIs in two high-level categories. First, internal and public APIs for our product offering: our front-end applications are backed by a powerful set of APIs, and we follow a three-layer API design pattern, including 1) Experience APIs, used by our front end to deliver the experience to our customers; 2) Process APIs, where our processing and business logic reside; and 3) System APIs, which access the data in our datastores. Second, APIs for our internal IT projects, to integrate the systems we use. For these projects we follow the same design pattern, but we build it using our own integration and API manager products, both of which are part of our platform.
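One respondent above describes calling APIs directly with Python's `requests` library and wrapping frequent services in small internal libraries. A minimal sketch of that pattern, assuming a hypothetical mail-provider REST API (the base URL, endpoint, and field names are invented for illustration):

```python
import requests

class MailClient:
    """Thin internal wrapper around a (hypothetical) mail-provider REST API."""

    def __init__(self, api_key, base_url="https://api.example.com/v1"):
        self.base_url = base_url
        # A Session reuses connections and carries auth headers on every call.
        self.session = requests.Session()
        self.session.headers.update({"Authorization": f"Bearer {api_key}"})

    def send(self, to, subject, body):
        resp = self.session.post(
            f"{self.base_url}/messages",
            json={"to": to, "subject": subject, "body": body},
            timeout=10,  # never block indefinitely on a remote API
        )
        resp.raise_for_status()  # surface HTTP errors to the caller
        return resp.json()
```

The value of such a wrapper is that call sites depend on `MailClient.send(...)` rather than on the provider's URL scheme, so swapping providers touches one module instead of every caller.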

Microservices

  • In the last 10 years, APIs have become the universal language for integrating applications. Monolithic applications kill innovation; it's too slow to integrate new things via the ESB. It's important to have 1) distributed integration and 2) container-friendly solutions, and 3) APIs are key to integration. Customers are moving away from choosing point solutions that solve one problem at a time, looking instead for full-service, pre-integrated solutions that work.
  • Enterprises have built monolithic applications with a lot of APIs in one binary. When microservices come along, these are cut into small pieces: instead of 50 APIs, you will have five. With serverless, it's an API call that runs a piece of code — one API call to one function. The smallest unit of compute shared across monolithic, microservices, and serverless is the API.
  • There are two broad ways: we build (and share and consume) differentiating APIs related to our core business, and for everything non-core, we prefer API-first SaaS vendors. In our domain, we maintain an ecosystem of almost 500 microservices that comprise our mass customization platform. These microservices are used by our portfolio businesses — and third-parties fulfilling orders on our behalf – for everything from artwork preparation to shop-floor optimization to shipping rate calculations.

    We’re big believers in providing composable building blocks that our businesses can use to solve problems in novel ways, and we look for the same discipline when buying solutions outside of our domain.  Traditional vendors with monolithic solutions expect you to conform to their platform; API-first vendors allow you to integrate their functionality into yours.

Other

  • We support different types of file services. We use APIs to manage the content lifecycle, right from ingest through archiving, data governance, and so forth. Every platform has to expose APIs to build custom applications and integrations. Within the platform itself, there are many services that use APIs.


How AI Can Help Redesign the Employee Experience

Will robots replace the human workforce? What is the role of artificial intelligence in relation to the employee of tomorrow? There are so many advantages that one gets carried away and does not stop to think about the shortcomings. Dealing with the overwhelming growth in employee data is a major factor to consider when thinking about Employee Experience (EE).

See the view from the AI perspective for a moment

There are inherent advantages, obvious to many but worth mentioning anyway. Though the onus is on mimicking human behavior, AI follows the same grading mechanism based on the complexity of the task. And because it decreases hiring biases, HR departments love it: decision making at the initial level is out of their hands.

Variables in Mule 3

Variables are used to store values for use within a flow in a Mule application. A variable can store the current message, the current message payload, or the current message attributes.

In Mule 3, there are three types of variables: Flow Variables, Session Variables, and Record Variables.
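In Mule 3's XML configuration, flow and session variables are set with dedicated message processors; a minimal config sketch (the flow name and variable values are hypothetical, and record variables are only available inside batch job steps):

```xml
<flow name="exampleFlow">
    <!-- flow variable: visible for the duration of this flow -->
    <set-variable variableName="orderId" value="#[payload.id]"/>
    <!-- session variable: travels with the message across transport boundaries -->
    <set-session-variable variableName="userId" value="#[payload.user]"/>
</flow>
```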

The ‘= NULL’ Mistake and Other SQL NULL Heresies

The SQL Prompt Best Practice rule checks whether a comparison or expression includes a NULL literal ('NULL'), which in SQL Server, rather than result in an error, will simply always produce a NULL result. Phil Factor explains how to avoid this, and other SQL NULL-related calamities.

SQL Prompt has a code analysis rule (BP011) that checks whether a comparison or expression includes a NULL literal ('NULL'). Such comparisons will always produce a NULL result. To determine whether a value is or isn’t NULL, use IS NULL or IS NOT NULL.
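The article concerns SQL Server, but the three-valued NULL logic is standard SQL, so the mistake can be demonstrated with any engine. A quick sketch using Python's sqlite3 as a stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (None,)])

# 'x = NULL' never matches anything: the comparison evaluates to NULL,
# which is not true, so even the NULL row is filtered out.
eq_null = conn.execute("SELECT COUNT(*) FROM t WHERE x = NULL").fetchone()[0]

# 'x IS NULL' is the correct test and finds the NULL row.
is_null = conn.execute("SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone()[0]

print(eq_null, is_null)  # 0 1
```

This is exactly the silent failure mode BP011 flags: the `= NULL` query returns zero rows with no error.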

5 Takeaways From the 2019 State of Testing Report

The State of Testing report’s 6th edition is now live, and it is packed with interesting trends and insights from the software testing community.

The State of Testing is the largest testing survey worldwide, with over 1,000 participants from over 80 countries. Created by PractiTest and Tea Time with Testers, the report aims to shed light on the most important trends in the software testing community, and grant testers the ability to better understand their professional status relative to other testers and companies worldwide.

Remote Debugging Java Applications With JDWP

Most Java developers have had to debug their applications, usually to find and fix an issue there. In many cases, the application to debug (known as the “debuggee”) is launched from within the IDE used by the developer, while the debugger is also integrated into the IDE, allowing easy inspection of the program state in a step-by-step manner. Sometimes, however, the debuggee JVM is launched from a separate command line, or by executing it on a separate host. In such scenarios, debugging necessitates launching the JVM with some options suitable for debugging, while your IDE debugger would have to connect to it. This is where JDWP (Java Debug Wire Protocol) comes into play.

What is JDWP?

In order to debug remotely executed JVMs (where the debuggee is separately launched locally or on another machine), the Java platform defines a protocol for communication between the JVM and the debugger. JDWP dictates the format of the commands sent by the debugger (e.g. to evaluate a local variable) and of the replies sent by the JVM. The exact way of transporting the packets is not specified; it is up to the implementation to define transport mechanisms. What JDWP specifies is the format and layout of packets containing commands and those containing replies. Therefore it is conceptually very simple.
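In practice, the debuggee is launched with the JDWP agent enabled on the command line. A typical invocation (the jar name is hypothetical; the `*:5005` address form is for Java 9+, while older JVMs use `address=5005`):

```shell
# Start the debuggee with a JDWP agent listening on TCP port 5005.
# suspend=y makes the JVM wait for a debugger to attach before running main.
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:5005 -jar app.jar
```

The IDE debugger then attaches to the host and port as a "remote debug" configuration.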

Migrating Spring Java Applications to Azure App Service (Part 2 – Logging and Monitoring)

As we demonstrated in Part 1 of the series, running on the cloud is not only for cool new applications following twelve-factor principles and coded to be cloud-native. In the first article, we demonstrated how to migrate a legacy Java Spring application to run on Azure App Service and how to handle JNDI, credentials, and externalizing the configuration. In this article, we will show how to enable logging and monitoring using Azure-native capabilities. The full application example is available on GitHub.

Application Insights

Application Insights is an extensible Application Performance Management (APM) service that helps monitor web applications. It will automatically detect dependencies such as HTTP or JDBC calls, and it includes powerful analytics tools to help you diagnose performance issues. For more details, refer to Microsoft's Getting Started Guide.

Introduction to Running Android Automated Testing On AWS Device Farm


When it comes to mobile testing, two questions come to mind: "Which devices does it support, and how do we test on all of them?" It’s quite challenging to test manually on every device, and it isn’t cost-effective either. Automating test cases saves lots of time. Another challenge is running automation on actual devices.

There are multiple solutions that run automation in the cloud, such as AWS Device Farm, Firebase Test Lab, Xamarin Test Cloud, Kobiton, Perfecto, Sauce Labs, and Experitest.

Automated Remediation for Cloud-Specific Threats

For business enterprises shifting to cloud platforms, going to the cloud offers a number of benefits and makes innovation happen faster. Working on the cloud removes barriers to innovation in many ways.

Cloud technology makes processes cheaper, easily scalable, and flexible. It provides businesses with flexible capacity for data storage and dissemination, which is not easily achieved with physical data centers. Cloud technology offers massive scaling capabilities, as enterprises can purchase more capacity whenever needed.

Serverless Approach to Backup and Restore EBS Volumes

Amazon Elastic Compute Cloud (EC2) instances use Elastic Block Store (EBS) as a root volume as well as additional data stores for applications. It is necessary to select the proper EBS volume type for the workload to achieve high performance, and to take the right approach to backing up EBS volumes regularly in production environments. We need a solution to back up and restore application data from EBS volume snapshots at any point in time, without paying unnecessary costs for archiving older snapshots. This article covers choosing the right EBS volume type for your application and provides a mechanism for handling EBS snapshots using serverless technology.

EBS Volume Types

Amazon EBS provides different volume types with different performance characteristics and cost models. We can choose a volume type based on our application requirements (the type of workload) to achieve higher performance and save on overall storage cost. EBS volumes come in two categories: SSD-backed volumes and HDD-backed volumes. SSD-backed volumes are used when the workload is I/O-intensive, such as transactional workloads where frequent reads and writes happen; their performance is rated in IOPS. HDD-backed volumes are used when an application requires continuous reads and writes to disk at a cheaper rate with high throughput; their performance is rated in throughput (MiB/s).
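The snapshot-retention part of such a serverless backup can be reduced to a small pure function, with the AWS calls kept at the edges. A sketch of that decision logic (the field names `SnapshotId` and `StartTime` follow boto3's `describe_snapshots` response; the retention period is an assumed policy, not from the article):

```python
from datetime import datetime, timedelta

def snapshots_to_prune(snapshots, retain_days, now):
    """Return the IDs of snapshots whose StartTime falls outside
    the retention window of `retain_days` days before `now`."""
    cutoff = now - timedelta(days=retain_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

# In an AWS Lambda handler this would be paired with boto3, roughly:
#   ec2 = boto3.client("ec2")
#   snaps = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
#   for sid in snapshots_to_prune(snaps, retain_days=30, now=datetime.utcnow()):
#       ec2.delete_snapshot(SnapshotId=sid)
```

Keeping the retention decision separate from the API calls makes the pruning policy unit-testable without touching AWS.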