Hello
Could I please ask for assistance? I need a downloadable file that holds all the streets and postal codes of Ireland.
The Weekly Squeak — Tanmai Gopal of Hasura
I’m back after a long break!
In this issue I speak with Tanmai Gopal about Hasura, an open-source and hosted platform that brings instant GraphQL APIs to your data.
Need some info for something I am working on
Looking for directions. Where would one go to find a downloadable Excel sheet listing the streets of Algiers, Algeria?
Weird item in MySQL database
Hi. I'm trying to write a game in PHP. Now a weird item shows up in my database.
http://www.leondegroot.nl/files/mysql_nieuw.png
"Nieuw" means "New".
Do you have any idea what this is and how to delete it?
Thanks,
Leon
Importing a CSV file with hex data.
I have a table with the following fields:
CREATE TABLE text (
    drawing INT NOT NULL,
    blockID INT NOT NULL,
    entityID INT NOT NULL,
    style INT,
    txt VARCHAR(255) NOT NULL,
    attrib INT);
My CSV file contains the data:
19 1CB2 E49 2 CLIENT MODULAR 1C2A
19 1CB3 E4B 2 CLIENT UG - 2 MODULAR PILOT PLANT 1C2C
19 1CB4 E4C 2 100 - 500 MICRON 1C2D
19 -1 E50 2 USERNAME 1C31
19 1CBA E51 2 15.8.2020 1C32
19 1C16 E58 2 PLANT 1C39
I'm using the following SQL to import the CSV file:
LOAD DATA INFILE '/tmp/_P2.8Q9nJ4_/text' INTO TABLE text
(drawing, @blockID, @entityID, style, txt, @attrib)
SET blockID = UNHEX(@blockID),
    entityID = UNHEX(@entityID),
    attrib = UNHEX(@attrib);
But this is the result:
+---------+---------+----------+-------+-------------------------------------------+--------+
| drawing | blockID | entityID | style | txt | attrib |
+---------+---------+----------+-------+-------------------------------------------+--------+
| 19 | 0 | 0 | 0 | CLIENT MODULAR | 0 |
| 19 | 0 | 0 | 0 | CLIENT UG - 2 MODULAR PILOT PLANT | 0 |
| 19 | 0 | 0 | 2 | 100 - 500 MICRON | 0 |
| 19 | 0 | 0 | 2 | USERNAME | 0 |
| 19 | 0 | 0 | 2 | PLANT | 0 |
| 19      | 0       | 0        | 2     | 15.8.2020                                 | 0      |
+---------+---------+----------+-------+-------------------------------------------+--------+
What is the correct way to import a CSV file into my table?
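For reference, the zeros most likely come from UNHEX(): it returns a binary string rather than a number, so assigning those raw bytes to an INT column converts to 0. A sketch of one likely fix, using CONV() to turn the hex digits into a decimal value (assuming the file is tab-separated, which is LOAD DATA's default):
LOAD DATA INFILE '/tmp/_P2.8Q9nJ4_/text' INTO TABLE text
(drawing, @blockID, @entityID, style, txt, @attrib)
SET blockID = CONV(@blockID, 16, 10),
    entityID = CONV(@entityID, 16, 10),
    attrib = CONV(@attrib, 16, 10);
CONV(N, from_base, to_base) returns the value as a decimal string, which MySQL converts cleanly to INT. One caveat: a positive to_base treats the input as unsigned, so the -1 in the sample data may need a signed target base, e.g. CONV(@blockID, 16, -10).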
Using PostgreSQL pgoutput Plugin for Change Data Capture With Debezium
Change Data Capture Architecture Using Debezium, Postgres, and Kafka was a tutorial on how to use Debezium for change data capture from Azure PostgreSQL and send the change events to Azure Event Hubs for Kafka - it used the wal2json output plugin.
What About the pgoutput Plugin?
This blog will provide a quick walkthrough of how to use the pgoutput plugin. I will not be repeating a lot of details, and will use containerized versions (using Docker Compose) of Kafka Connect, Kafka (and ZooKeeper) to keep things simple. So the only thing you need is Azure PostgreSQL, which you can set up using a variety of options including the Azure Portal, Azure CLI, Azure PowerShell, and an ARM template.
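As a quick orientation (my note, not from the original walkthrough): pgoutput is the logical decoding plugin built into PostgreSQL 10 and later, and Debezium consumes it through a publication and a replication slot. A minimal sketch of the server-side objects involved, using Debezium's default names:
CREATE PUBLICATION dbz_publication FOR ALL TABLES;
SELECT pg_create_logical_replication_slot('debezium', 'pgoutput');
The connector can usually create both of these itself; creating them up front mainly matters on managed services where the connector's database role lacks the required privileges.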
MySQL 8 Query Join Problem
I have the following query, which died after running for over an hour on my local MySQL 8 server:
UPDATE download
LEFT JOIN lookup ON download.ip_address between lookup.start_ip AND lookup.end_ip
SET ref = (
SELECT lookup.id FROM lookup WHERE download.ip_address between lookup.start_ip AND lookup.end_ip);
All ip fields are IPV4 and stored as unsigned integers and are indexed. The lookup table is approx. 3M rows, relating ip ranges to country, area, and city. The download table is approx. 2K rows. What I'm trying to do is get the id from the lookup table row that has the ip range that the download ip_address falls into.
Does someone see a problem with the query?
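For what it's worth (my observation, not part of the original post), two things stand out. The LEFT JOIN is redundant, since ref is assigned entirely by the correlated subquery, and a predicate like ip_address BETWEEN start_ip AND end_ip can use an index for only one of the two bounds, so every download row triggers a large range scan over lookup. A sketch of a common rewrite, assuming the lookup ranges do not overlap:
UPDATE download d
SET d.ref = (
    SELECT l.id
    FROM lookup l
    WHERE l.start_ip <= d.ip_address
      AND l.end_ip >= d.ip_address
    ORDER BY l.start_ip DESC
    LIMIT 1
);
The ORDER BY l.start_ip DESC LIMIT 1 lets MySQL walk the start_ip index backwards from the candidate range and stop at the first match, instead of examining every range that starts below the address.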
Concise Guide to Data Migration
Migrating data is a challenging but very important process: it's a fundamental component of upgrading or consolidating servers, conducting server maintenance, relocating data to a data center, and adding data-intensive apps like data lakes and warehouses, among other important processes.
Because of the complexity of data migration and risks associated with it, such as costly downtime or corrupted or lost data, understanding the process and having a solid data migration implementation plan is critical.
Serverless Data Processing Using Azure Tools
One of the previous blogs covered some of the concepts behind how Azure Event Hubs supports multiple protocols for data exchange. In this blog, we will see it in action using an example. With the help of a sample app, you will see how to combine a real-time data ingestion component with a serverless processing layer.
The sample application has the following components:
Looking for postal code information and street info
Hi there,
Not sure if I am in the right place, but I wanted to ask where I can access a CSV file of all the streets in Belgium?
Getting Started With MongoDB Using Go
As a Go enthusiast, it's great to see first-class support for MongoDB in the form of an official MongoDB Go driver. We will learn how to use it by building a simple REST API for a good old CRUD-style app!
In this blog, we will cover the following topics:
Data Pipeline Using MongoDB and Kafka Connect on Kubernetes
In Kafka Connect on Kubernetes, the easy way!, I had demonstrated Kafka Connect on Kubernetes using Strimzi along with the File source and sink connector. This blog will showcase how to build a simple data pipeline with MongoDB and Kafka with the MongoDB Kafka connectors, which will be deployed on Kubernetes with Strimzi.
I will be using the following Azure services:
Change Data Capture Architecture Using Debezium, Postgres, and Kafka
Change Data Capture (CDC) is a technique used to track row-level changes in database tables in response to create, update, and delete operations. Different databases use different techniques to expose these change data events - for example, logical decoding in PostgreSQL, the MySQL binary log (binlog), etc. This is a powerful capability, but useful only if there is a way to tap into these event logs and make them available to other services which depend on that information.
Debezium does just that! It is a distributed platform that builds on top of Change Data Capture features available in different databases. It provides a set of Kafka Connect connectors which tap into row-level changes (using CDC) in database table(s) and convert them into event streams. These event streams are sent to Apache Kafka which is a scalable event streaming platform - a perfect fit! Once the change log events are in Kafka, they will be available to all the downstream applications.
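To make "logical decoding" concrete (an illustrative aside, not from the original post), PostgreSQL exposes it directly through SQL, so you can peek at raw change events without any connector, provided wal_level is set to logical:
-- create a slot with the built-in test_decoding plugin, then inspect pending changes
SELECT * FROM pg_create_logical_replication_slot('demo_slot', 'test_decoding');
SELECT * FROM pg_logical_slot_peek_changes('demo_slot', NULL, NULL);
Debezium does essentially this at scale: it holds a replication slot open and converts the decoded changes into Kafka events.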
update values with subquery returns not updatable
Hi all, I have a question about updating two values in a SQL table using a subquery. My database is called books, and the table below is called books too:
id   title    author   location_id
11   Tera     Netrear  4
14   Disser   Bubu     1
15   Tenno    Puoio    4
16   Oban     Tyrone   5
18   Sisnobi  Teru     3
location_id is a foreign key into another table, but that shouldn't matter here. What I want to do is change every location_id currently at 4 to 2. I know I could do it like this:
update books set location_id = 2 where location_id = 4
but I'd like to know if I could do it with a subquery, something like:
update (SELECT * from books where location_id = 4) as thebook set thebook.location_id = 2
but when I run this query it tells me that the target table thebook of the UPDATE is not updatable. Why is that, and can this be achieved? I don't really mind if it's not performant; I think it should be possible, but I can't quite get the right way to do it.
So the idea is: make a selection first, then change one or more values on that selection - this is the concept I'm trying to prove.
Any idea?
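For reference, MySQL never treats a derived table (a subquery used as a table) as updatable, which is exactly what that error message says. A common workaround is to join the selection back to the base table and update through the join - a sketch, assuming id is the primary key:
UPDATE books b
JOIN (SELECT id FROM books WHERE location_id = 4) AS thebook
  ON b.id = thebook.id
SET b.location_id = 2;
This keeps the "select first, then update that selection" shape, but the column being assigned lives on the real table rather than on the derived one.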
API not accepting input
Hello. I am making a database using SQLite. Then I need to make an API with functions that return information about that database. Then I need to print out a report. However, when I try to return an answer to a query using inputs, nothing is returned. I either get an empty array or I get "<sqlite3.Cursor object at 0x034D2D20>". If I put default values in the query instead of inputs, values are returned. Does anyone have any ideas on how I can fix this, please (see below for more information)? Thank you in advance!
Here is the code that I have so far:
This is the code that creates the database....
import sqlite3

with sqlite3.connect('homework3.sq') as db:
    cursor = db.cursor()
    cursor.execute("""CREATE TABLE IF NOT EXISTS COURSES(
        course_id INTEGER PRIMARY KEY,
        department varChar(100) NOT NULL,
        course_number varChar(100) NOT NULL,
        course_name varChar(100) NOT NULL,
        semester varChar(100) NOT NULL,
        sem_year varChar(20) NOT NULL,
        grade varChar(1) NOT NULL)""")
    cursor.execute("""CREATE TABLE IF NOT EXISTS PREREQUISITE(
        course_id INTEGER PRIMARY KEY,
        prereq1 Numeric(4) NOT NULL,
        prereq2 Numeric(4) NOT NULL,
        ID REFERENCES COURSES("course_id")
        )""")
    cursor.execute("""INSERT INTO COURSES (department, course_number, course_name, semester, sem_year, grade) VALUES
        /*Sample Kpop Boy Bands- Present*/
        ('MATH', '1190', 'Calculus I', 'Summer', '2018', 'A'),
        ('CSE', '1322', 'Programming and Problem Solving', 'Fall', '2018', 'B'),
        ('CSE', '1322L', 'Programming and Problem Solving Lab', 'Fall', '2018', 'A'),
        ('CS', '3305', 'Data Structures', 'Spring', '2019', 'A'),
        ('CS', '3503', 'Computer Organization and Architecture', 'Spring', '2019', 'A'),
        ('MATH', '2202', 'Calculus II', 'Spring', '2019', 'B'),
        ('MATH', '2345', 'Discrete Mathematics', 'Fall', '2018', 'A'),
        ('CS', '3410', 'Introduction to Database Systems', 'Spring', '2020', 'A'),
        ('SWE', '3313', 'Introduction to Software Engineering', 'Spring', '2020', 'A'),
        ('CSE', '3801', 'Professional Practices and Ethics', 'Spring', '2020', 'A'),
        ('CS', '3502', 'Operating Systems', 'Fall', '2020', 'B'),
        ('CS', '4720', 'Internet Programming', 'Fall', '2020', 'A');""")
    cursor.execute("""INSERT INTO PREREQUISITE(prereq1, prereq2) VALUES
        (0,0),
        (0,0),
        (0,0),
        (2,3),
        (2,3),
        (1,0),
        (0,0),
        (2,3),
        (2,3),
        (2,3),
        (4,5),
        (4,8)""")
    db.commit()
This code is in the API file. This code works fine:
def infoA():
    return cursor.execute("SELECT * FROM COURSES WHERE sem_year = 2018 AND semester = 'Fall'")
But if I try to add parameters so that the function can accept inputs (as shown below), I don't get an answer:
def infoB(year, sem):
    return cursor.execute("SELECT * FROM COURSES WHERE sem_year = ? AND semester = ?", [year, sem])
Here is the report file also:
import API

# print line to divide results
print("=" * 111)
# format string describing each column of the report
template = "|{:<5} | {:15} | {:15}| {:40} | {:10}|{:5}| {:>5}|"
# write the header row of the report
row = template.format("ID", "Department", "Course Number", "Course Name", "Semester", "Year", "Grade")
print(row)
# print line to divide results
print("=" * 111)
# Gets function from API. When I run this function I get this: "[]"
info = API.infoB(2018, 'FALL')
# If I run this, it will return information
# info = API.infoA()
# If I run this using infoB, I get "<sqlite3.Cursor object at 0x034D2D20>"
# print(info.fetchall())
for data in info:
    id = data[0]
    depart = data[1]
    num = data[2]
    name = data[3]
    sem = data[4]
    class_year = data[5]
    class_grade = data[6]
    print(template.format(id, depart, num, name, sem, class_year, class_grade))
# print line to divide results
print("=" * 111)
Sending IoT Data From Arduino to QuestDB
IoT on QuestDB
As you can well imagine, I've been super busy in my new job at QuestDB, so this has taken me longer than I would have liked, but here we are. If you know me at all, you know that one of the first things I always do with new things is, well, connect things to them! So I immediately went to connect an IoT device to QuestDB to see how it went. Unsurprisingly, it went quite well. So here's how it went, from start to finish.
The Database Part
The first thing I had to do was to get QuestDB up and running. Luckily, this is very straightforward. I guess I could have gone the Docker route, but as you're probably aware, I'm not a huge fan of Docker (in no small part due to the fact that it will literally suck the life out of a macOS laptop). There's also (for you macOS users) `brew install questdb`, but since I work here, and I wanted to test out the latest and greatest web console, I decided to build from source:
Deleting Lost Transactions in MS SQL Server (Part 3)
I need help with “before insert” Trigger
I'm trying to create a trigger that does two things: first, take an IP that's in dot notation, run INET_ATON() on it, and put the result in another field; second, check a lookup table to identify an IP range that the result of the first action falls into, and enter the id of that row in the table. Do I have to break this down into separate triggers, a before insert and an after insert? This is a MySQL database.
DELIMITER //
CREATE TRIGGER download_ins BEFORE INSERT ON download
FOR EACH ROW begin
    declare vip varbinary(16);
    select `ADDRESS` from `download`;
    vip = inet_aton(`ADDRESS`);
    set NEW.ip_address = vip;
    declare vrefer int(25) unsigned;
    select id
    from `ip_lookup`
    where NEW.ip_address between ip_lookup.start and ip_lookup.end
    limit 1;
    if vrefer is not null then
        set NEW.refer = vrefer;
    else
        exit;
    end if;
END; //
DELIMITER ;
Thanks in advance
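For reference, a single BEFORE INSERT trigger should be enough, since both values only need to be set on NEW before the row is written. A minimal sketch, assuming the dotted-quad string arrives in a column named ADDRESS on download (note that DECLARE statements must come first, SELECT ... INTO captures the lookup result, and MySQL has no EXIT statement - simply skip the assignment when nothing matches):
DELIMITER //
CREATE TRIGGER download_ins BEFORE INSERT ON download
FOR EACH ROW
BEGIN
    DECLARE vrefer INT UNSIGNED;
    -- convert the dotted-quad string to an unsigned integer
    SET NEW.ip_address = INET_ATON(NEW.ADDRESS);
    -- find the range this address falls into
    SELECT id INTO vrefer
    FROM ip_lookup
    WHERE NEW.ip_address BETWEEN ip_lookup.start AND ip_lookup.end
    LIMIT 1;
    IF vrefer IS NOT NULL THEN
        SET NEW.refer = vrefer;
    END IF;
END//
DELIMITER ;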