5 Data Models for IoT

Apache Cassandra is a rock-solid choice for managing IoT and time series data at scale. The most common use case, storing, querying, and analyzing time series generated by IoT devices, is well understood and documented. In general, a time series is stored and queried based on its source IoT device. However, there is another class of IoT applications that requires quick access to the most recent data generated by a collection of IoT devices based on a known state. The question that such applications need to answer is: Which IoT devices or sensors are currently reporting a specific state? In this blog post, we focus on this question and provide five possible data modeling solutions to answer it efficiently in Cassandra.

Introduction

The Internet of Things (IoT) is generating massive amounts of time series data that needs to be stored, queried, and analyzed. Apache Cassandra is an excellent choice for this task: not only because of its speed, reliability, and scalability but also because its internal data model has built-in support for time-ordered data.
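To make the two query patterns concrete, here is a minimal sketch using the DataStax Java driver (4.x); the keyspace, table, and column names are illustrative assumptions, not the article's schema. One table partitions readings by device and leans on Cassandra's built-in time ordering; the other partitions by reported state so the question above becomes a single-partition query.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;

public class IotStateQueries {
    public static void main(String[] args) {
        // Connects to a local node using the driver's defaults.
        try (CqlSession session = CqlSession.builder().withKeyspace("iot").build()) {
            // Classic time series: rows inside each device's partition are
            // kept sorted by time, newest first, by the storage engine itself.
            session.execute(
                "CREATE TABLE IF NOT EXISTS readings_by_device ("
              + " device_id text, event_time timestamp, state text,"
              + " PRIMARY KEY (device_id, event_time))"
              + " WITH CLUSTERING ORDER BY (event_time DESC)");

            // State lookup: one partition per state, one row per device.
            session.execute(
                "CREATE TABLE IF NOT EXISTS devices_by_state ("
              + " state text, device_id text, reported_at timestamp,"
              + " PRIMARY KEY (state, device_id))");

            // Which devices are currently reporting the ALARM state?
            for (Row row : session.execute(
                    "SELECT device_id, reported_at FROM devices_by_state"
                  + " WHERE state = 'ALARM'")) {
                System.out.println(row.getString("device_id")
                        + " since " + row.getInstant("reported_at"));
            }
        }
    }
}
```

One trade-off of the second table: when a device changes state, its old row has to be deleted as the new one is written, which is exactly the kind of wrinkle that motivates having several alternative models to choose from.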

Inheritance vs. Composition in JPA

Introduction

"Don't repeat yourself," or DRY. Developers try to adhere to this principle during software development: it helps avoid writing redundant code and, as a result, makes the code easier to maintain. But how do we follow this principle in the JPA world?

There are two approaches: inheritance and composition. Both have their pros and cons. Let's figure out what they are using an example that is not quite "real-world" but is representative.
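As a sketch of what the two approaches look like in code (the entities below are hypothetical stand-ins, not the article's example): with inheritance, shared fields live in a mapped superclass; with composition, they are grouped into an embeddable value type. Both mappings produce the same flat columns in each entity's table.

```java
import jakarta.persistence.Embeddable;
import jakarta.persistence.Embedded;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.MappedSuperclass;

// Inheritance: shared columns live in a mapped superclass and are
// inherited by every entity that extends it.
@MappedSuperclass
abstract class AuditableEntity {
    protected String createdBy;
    protected String updatedBy;
}

@Entity
class Book extends AuditableEntity {
    @Id
    private Long id;
    private String title;
}

// Composition: the same columns are grouped into an embeddable value
// type and embedded as a field instead of being inherited.
@Embeddable
class AuditInfo {
    private String createdBy;
    private String updatedBy;
}

@Entity
class Magazine {
    @Id
    private Long id;
    private String title;

    @Embedded
    private AuditInfo audit = new AuditInfo();
}
```

Inheritance fixes the shared fields into the class hierarchy, while composition lets the same embeddable be reused in any entity, which often makes it the more flexible choice.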

Building a Modern B2B E-Commerce Tech Stack

Introduction

Most articles about building e-commerce software focus on B2C (business-to-consumer) settings; there are far fewer technical guides for developers in the $1.3 trillion B2B (business-to-business) e-commerce industry. B2B e-commerce is notably different from B2C retailing, and building software for it poses a unique set of challenges.

In this post, I’ll shed some light on building a modern, scalable B2B e-commerce platform. I’ll share some of the technical considerations you’ll weigh and the architectural decisions you’ll encounter. Along the way, I’ll mention a few tools that will help you build B2B e-commerce software faster.

Building a Scalable E-Commerce Data Model

Introduction

If selling products online is a core part of your business, then you need to build an e-commerce data model that’s scalable, flexible, and fast. Most off-the-shelf providers like Shopify and BigCommerce are built for small stores selling a few million dollars in orders per month, so many e-commerce retailers working at scale start to investigate creating a bespoke solution.
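To ground what "scalable and flexible" means at the data layer, here is a minimal, hypothetical sketch of the entities most such models start from (names and fields are illustrative, not from the article): products with purchasable variants, and orders whose line items snapshot the unit price at purchase time so order history survives catalog changes.

```java
import java.math.BigDecimal;
import java.time.Instant;
import java.util.List;

// Core entities of a typical e-commerce model, as plain value types.
record Product(long id, String name, List<Variant> variants) {}
record Variant(long id, String sku, BigDecimal currentPrice) {}

// Line items snapshot the unit price so past orders stay correct
// even after catalog prices change: a key flexibility concern.
record OrderLine(long variantId, String sku, int quantity, BigDecimal unitPriceAtPurchase) {}
record Order(long id, long customerId, Instant placedAt, List<OrderLine> lines) {
    BigDecimal total() {
        return lines.stream()
                .map(l -> l.unitPriceAtPurchase().multiply(BigDecimal.valueOf(l.quantity())))
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}

class ShopModelDemo {
    public static void main(String[] args) {
        Order order = new Order(1L, 42L, Instant.now(),
                List.of(new OrderLine(7L, "SKU-7", 2, new BigDecimal("19.99"))));
        System.out.println(order.total()); // 39.98
    }
}
```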


Dynamic SQL Injection With Oracle ERP Cloud

In the previous article, we learned how to design and develop an Oracle Cloud BI report. We will now take the same report and convert it into a report based on dynamic SQL injection.

Log in to Oracle Cloud Applications, go to Tools in the Navigator, and click Reports and Analytics. Then click Browse Catalog to launch the BI workspace.
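The steps above are Oracle-specific, but the underlying idea can be sketched generically: the report's base query stays fixed while a SQL fragment supplied as a report parameter is spliced in at run time. The Java below is only an illustration of that idea with hypothetical names, not Oracle's API; note that such a fragment must come exclusively from trusted report parameters, since splicing raw SQL into a query is exactly what makes injection dangerous.

```java
public class DynamicReportQuery {
    // A generic illustration of the "dynamic SQL" idea behind such reports:
    // the base query is fixed, and a WHERE-clause fragment arriving as a
    // report parameter is appended at run time. All names are hypothetical.
    static String buildQuery(String dynamicWhereClause) {
        String baseQuery = "SELECT invoice_id, supplier, amount FROM ap_invoices";
        if (dynamicWhereClause == null || dynamicWhereClause.isBlank()) {
            return baseQuery;
        }
        return baseQuery + " WHERE " + dynamicWhereClause;
    }

    public static void main(String[] args) {
        // In the report, this fragment would be a parameter value.
        System.out.println(buildQuery("amount > 1000 AND supplier = 'ACME'"));
    }
}
```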

Data Modeling Tools Detailed Comparison

A data modeling tool, or database modeling tool, is an application that helps data modelers create and design database structures. Data modeling tools thus make the data modeling process easier and provide many features that help data modelers understand their data.

There are many data modeling tools available for different database platforms. This multitude of options makes it difficult to choose a tool that suits the user's needs.

5 Ways to Adapt Your Analytics Strategy to the New Normal

COVID-19 has upended traditional business models and made years of carefully curated data and forecasting practically irrelevant. With the world on its head, consumers can’t be expected to behave the way they did nine months ago, and we’ve witnessed major shifts in how and where people and businesses spend their money. This new normal, the “novel economy” as many have dubbed it, requires business leaders to think on their feet and adjust course quickly while managing the economic impact of lockdowns, consumer fear, and continual uncertainty. The decisions they make today will affect their companies’ trajectories for years to come, so it is more important than ever to be empowered to make informed business decisions.

In recent years, organizations across industries have been implementing advanced analytics programs at a record pace, drawn by the allure of increased efficiency and earnings. According to McKinsey, these technologies could deliver between $9.5 trillion and $15.4 trillion in annual economic value when properly implemented. However, most organizations struggle with the cultural and organizational hurdles involved, such as adopting agile delivery methods or establishing strong data practices. In other words, advanced analytics programs are being adopted across the board, but successful implementation takes time.

Starting a Data Model With Repods

Repods is a data platform that can create and manage data pods. These pods are compact data warehouses with flexible storage, vCores, memory, and all required tooling. You can manage personal data projects, work together in a private team, or collaborate on open data in public data pods.

Before we start

Before creating a data pod, it is important to be aware of the scope of information that we have and need for our analysis. The goal is to create a data model that closely reflects the business entities of the subject area, without focusing on how reports are going to be created or how we are going to fill this data model with the given data. A good place to start is by answering the following questions:

How to Get Users’ Home Address Right

Pieter Brueghel the Younger, Paying the Tax (The Tax Collector), 1640

In my previous article, we just skimmed the surface of objects. Let's continue our reconnaissance. Today's topic is a tough one. It's not quite BIG DATA, but it's still data that isn't easy to work with: fairly large amounts of it. It won't all fit into RAM at once, and some of it won't even fit on the drive (not for lack of space, but because there's a lot of junk). The name of our subject is the FIAS DB: the Federal Information Address System database, the database of addresses in Russia. The archive is 5.5 GB, and it's a compressed XML file. After extraction, it becomes a full 53 GB (set aside 110 GB for the extraction). And when you start to parse and convert it, that 110 GB won't be enough. There won't be enough RAM either.
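A standard way around both limits is to stream the XML rather than load it. Here is a rough sketch using Java's built-in StAX pull parser; the file, element, and attribute names are placeholders, since FIAS table layouts differ between exports.

```java
import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StreamingAddressParser {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        // A pull parser reads one event at a time, so memory use stays flat
        // no matter how many gigabytes the extracted file occupies.
        try (FileInputStream in = new FileInputStream("ADDROBJ.XML")) {
            XMLStreamReader reader = factory.createXMLStreamReader(in);
            long count = 0;
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT
                        && "Object".equals(reader.getLocalName())) {
                    // Each address object arrives as attributes on one element;
                    // handle it here (e.g. batch-insert into a database).
                    String name = reader.getAttributeValue(null, "FORMALNAME");
                    if (++count <= 3) {
                        System.out.println("sample: " + name);
                    }
                }
            }
            reader.close();
            System.out.println("Parsed " + count + " address objects");
        }
    }
}
```

Filtering during the streaming pass also sidesteps the disk problem: the junk fields can be dropped on the fly instead of being materialized to disk first.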