Global Deadlock Resolution in GBase 8c Transactions and Locks

The GBase 8c database provides mechanisms for deadlock detection and automatic resolution. A cluster comprises multiple CNs (Coordinator Nodes) and DNs (Data Nodes). Deadlocks can occur within a single CN or DN, or across multiple CNs or DNs. Deadlocks that span multiple CNs or DNs are termed global deadlocks: processes across multiple databases in the cluster wait on one another's resources in a cycle. This article focuses on resolving distributed global deadlocks.

[Figure: timeline of two transactions updating rows id=1 and id=4 and forming a global deadlock]

As depicted in the figure above, Transaction 1 begins at time T1. At T2, Transaction 1 updates the t column of the row with id=1, while Transaction 2 begins. At T3, Transaction 2 updates the t column of the row with id=4. Then, at T4, Transaction 1 attempts to update the t value for id=4 while Transaction 2 attempts to update the t value for id=1; each waits on the other's lock, producing a global deadlock.

Global deadlock detection algorithms mainly fall into two categories: centralized and distributed:

1. Centralized: The GTM (Global Transaction Manager) node collects transaction lock-wait information from the other nodes in the cluster to construct a global wait-for graph, then searches it for deadlock cycles (using algorithms such as depth-first search or topological sorting) and issues commands to terminate the transactions involved. This approach can overload the GTM node, which may become a cluster-wide performance bottleneck; moreover, if the GTM node fails, deadlock detection stops working altogether, so this approach is not recommended.

2. Distributed (currently used in GBase 8c): Each CN initiates deadlock detection independently. Detection messages propagate across nodes along the wait-for relationships among transaction-processing threads. If a thread receives its own detection message back, the wait-for relationships form a cycle, i.e., a global deadlock; the transaction then rolls back, resolving the deadlock.

[Figure: distributed deadlock detection, with waiting messages exchanged between CN1 and CN2]

Example:

When Transaction 1 detects that data it wants to update is locked by another transaction, it sends a waiting message to the node holding the lock; in this example, CN2, where Transaction 2 originated. Similarly, Transaction 2, upon finding that the data it needs is locked by Transaction 1, sends a waiting message to CN1, where Transaction 1 is running. The transactions wait for a predetermined timeout period. If either node detects the waiting cycle within this period, the node that initiated the detection aborts its transaction, thereby resolving the global deadlock.

Testing Method:

The default deadlock timeout is 1 second; for the test, raise it to 20 seconds:

show deadlock_timeout;
alter system set deadlock_timeout = '20s';

Create Test Table

create table test(id int,info text);
insert into test values(1,'Tom');
insert into test values(2,'Lane');

session1, Execute Update

begin;
update test set info = 'test' where id = 1;

session2, Execute Update

begin;
update test set info = 'test' where id = 2;

session1, Execute Update

update test set info = 'test' where id = 2;  --stuck

session2, Execute Update

update test set info = 'test' where id = 1;  --stuck

After 20 seconds, one session's transaction detects the deadlock and is terminated (rolled back), while the other session can proceed and commit successfully.

Migration from GaussDB to GBase 8a: Clients, Third-Party Tools, and APIs

1. Comparison Between GSQL and GCCLI Clients

GSQL is the command-line database client provided with GaussDB 200. It lets you connect to the server for operations and maintenance; its functionality includes connecting to the database, executing SQL statements, executing meta-commands, and directly setting and using variables.

GBase 8a MPP's GCCLI client supports connecting to the database and executing SQL, but does not support directly setting and using variables.

Logic that relies on such variables in GaussDB therefore has to be rewritten, for example as stored procedures, when moving to GBase 8a. The GaussDB usage looks like this:

gsql -h 10.180.123.163 -d postgres -U jack -p 25308 
\set foo 'HR.areaS' 
select * from :foo; 
area_id | area_name 
---------+------------------------ 
4 | Middle East and Africa 
3 | Asia 
1 | Europe 
2 | Americas 
(4 rows)
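
Since GCCLI has no equivalent of gsql's \set, the same lookup would typically be wrapped in a stored procedure on GBase 8a. The following is only an illustrative sketch: the procedure name is hypothetical and MySQL-style procedure syntax is assumed; consult the GBase 8a manual for the exact dialect.

-- Hypothetical replacement for the gsql variable :foo, fixing
-- the table reference inside a procedure instead.
CREATE PROCEDURE query_areas()
BEGIN
  SELECT * FROM HR.areaS;
END;

CALL query_areas();
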
2. Data Studio and GBase Studio

Data Studio is a GUI tool that allows you to connect to the database, execute, and debug SQL statements and stored procedures. Data Studio supports the basic features of GaussDB 200 and provides a graphical interface for database developers. The corresponding tool for GBase 8a MPP is GBase Studio.

3. Migration of Other Third-Party Tools and Application Interfaces

Similar to GaussDB, GBase 8a supports the use of standard database interface APIs such as JDBC, ODBC, Perl DBI, and Python DBI. For detailed information on the installation and configuration of specific interfaces, refer to the relevant API interface manuals provided by GBase 8a.

Third-party tools or programs that access GaussDB through these API interfaces can be migrated to the GBase 8a database by conventionally reconfiguring the connection driver.

Common third-party tools supported by GBase 8a include:

  • ETL Tools: Kettle, Informatica
  • BI Tools: Cognos, Tableau
  • Analytical Tools: SAS, SPSS

For detailed information on integrating specific tools, refer to the integration manuals provided by GBase 8a.

MariaDB 10.11.6 Galera: single failed node stuck during startup

Basic background information
MariaDB Ver 15.1 Distrib 10.11.6-MariaDB, a Galera cluster with three nodes:
Node1:192.168.18.78
Node2: 192.168.18.79
Node3: 192.168.18.80

Node1 was restarted after a one-hour power outage. After executing systemctl start mariadb, the startup remained stuck for a long time (over six hours of running) without recovering.

The Galera configuration is as follows:

[mysqld]
event_scheduler=ON
bind-address=0.0.0.0

# Galera provider
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so

# Galera cluster membership
wsrep_cluster_name="hy_galera_cluster"
wsrep_cluster_address="gcomm://192.168.18.78,192.168.18.79,192.168.18.80"

# This node
wsrep_node_address="192.168.18.78"
wsrep_node_name="data-server"

# SST 
wsrep_sst_method=rsync

# InnoDB Configuration
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
binlog_format=ROW

The log output is as follows:

240403 05:05:09 mysqld_safe Starting mariadbd daemon with databases from /var/lib/mysql
240403 05:05:09 mysqld_safe WSREP: Running position recovery with --disable-log-error  --pid-file='/var/lib/mysql/data-server-recover.pid'
240403 05:05:09 mysqld_safe WSREP: Recovered position 20c1183c-e5c5-11ee-9129-97e9406cb3f8:7183126
2024-04-03  5:05:10 0 [Note] Starting MariaDB 10.11.6-MariaDB source revision fecd78b83785d5ae96f2c6ff340375be803cd299 as process 233407
2024-04-03  5:05:10 0 [Note] WSREP: Loading provider /usr/lib64/galera/libgalera_smm.so initial position: 20c1183c-e5c5-11ee-9129-97e9406cb3f8:7183126
2024-04-03  5:05:10 0 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
2024-04-03  5:05:10 0 [Note] WSREP: wsrep_load(): Galera 26.4.16(rXXXX) by Codership Oy <info@codership.com> loaded successfully.
2024-04-03  5:05:10 0 [Note] WSREP: Initializing allowlist service v1
2024-04-03  5:05:10 0 [Note] WSREP: CRC-32C: using 64-bit x86 acceleration.
2024-04-03  5:05:10 0 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1, safe_to_bootstrap: 0
2024-04-03  5:05:10 0 [Note] WSREP: GCache DEBUG: opened preamble:
Version: 2
UUID: 20c1183c-e5c5-11ee-9129-97e9406cb3f8
Seqno: -1 - -1
Offset: -1
Synced: 0
2024-04-03  5:05:10 0 [Note] WSREP: Recovering GCache ring buffer: version: 2, UUID: 20c1183c-e5c5-11ee-9129-97e9406cb3f8, offset: -1
2024-04-03  5:05:10 0 [Note] WSREP: GCache::RingBuffer initial scan...  0.0% (        0/134217752 bytes) complete.
2024-04-03  5:05:10 0 [Note] WSREP: GCache::RingBuffer initial scan...100.0% (134217752/134217752 bytes) complete.
2024-04-03  5:05:10 0 [Note] WSREP: Recovering GCache ring buffer: Recovery failed, need to do full reset.
2024-04-03  5:05:10 0 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 192.168.18.78; base_port = 4567; cert.log_conflicts = no; cert.optimistic_pa = yes; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.keep_plaintext_size = 128M; gcache.mem_size = 0; gcache.name = galera.cache; gcache.page_size = 128M; gcache.recover = yes; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.fc_single_primary = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0
2024-04-03  5:05:10 0 [Note] WSREP: Start replication
2024-04-03  5:05:10 0 [Note] WSREP: Connecting with bootstrap option: 0
2024-04-03  5:05:10 0 [Note] WSREP: Setting GCS initial position to 00000000-0000-0000-0000-000000000000:-1
2024-04-03  5:05:10 0 [Note] WSREP: protonet asio version 0
2024-04-03  5:05:10 0 [Note] WSREP: Using CRC-32C for message checksums.
2024-04-03  5:05:10 0 [Note] WSREP: backend: asio
2024-04-03  5:05:10 0 [Note] WSREP: gcomm thread scheduling priority set to other:0 
2024-04-03  5:05:10 0 [Note] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
2024-04-03  5:05:10 0 [Note] WSREP: restore pc from disk failed
2024-04-03  5:05:10 0 [Note] WSREP: GMCast version 0
2024-04-03  5:05:10 0 [Note] WSREP: (b0bc65f1-8af3, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
2024-04-03  5:05:10 0 [Note] WSREP: (b0bc65f1-8af3, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
2024-04-03  5:05:10 0 [Note] WSREP: EVS version 1
2024-04-03  5:05:10 0 [Note] WSREP: gcomm: connecting to group 'hy_galera_cluster', peer '192.168.18.78:,192.168.18.79:,192.168.18.80:'
2024-04-03  5:05:10 0 [Note] WSREP: (b0bc65f1-8af3, 'tcp://0.0.0.0:4567') Found matching local endpoint for a connection, blacklisting address tcp://192.168.18.78:4567
2024-04-03  5:05:10 0 [Note] WSREP: (b0bc65f1-8af3, 'tcp://0.0.0.0:4567') connection established to e1facb37-96cc tcp://192.168.18.80:4567
2024-04-03  5:05:10 0 [Note] WSREP: (b0bc65f1-8af3, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: 
2024-04-03  5:05:10 0 [Note] WSREP: (b0bc65f1-8af3, 'tcp://0.0.0.0:4567') connection established to e8ab0109-98a4 tcp://192.168.18.79:4567
2024-04-03  5:05:10 0 [Note] WSREP: EVS version upgrade 0 -> 1
2024-04-03  5:05:10 0 [Note] WSREP: declaring e1facb37-96cc at tcp://192.168.18.80:4567 stable
2024-04-03  5:05:10 0 [Note] WSREP: declaring e8ab0109-98a4 at tcp://192.168.18.79:4567 stable
2024-04-03  5:05:10 0 [Note] WSREP: PC protocol upgrade 0 -> 1
2024-04-03  5:05:10 0 [Note] WSREP: Node e1facb37-96cc state prim
2024-04-03  5:05:10 0 [Note] WSREP: view(view_id(PRIM,b0bc65f1-8af3,46) memb {
    b0bc65f1-8af3,0
    e1facb37-96cc,0
    e8ab0109-98a4,0
} joined {
} left {
} partitioned {
})
2024-04-03  5:05:10 0 [Note] WSREP: save pc into disk
2024-04-03  5:05:10 0 [Note] WSREP: gcomm: connected
2024-04-03  5:05:10 0 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
2024-04-03  5:05:10 0 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
2024-04-03  5:05:10 0 [Note] WSREP: Opened channel 'hy_galera_cluster'
2024-04-03  5:05:10 0 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 3
2024-04-03  5:05:10 0 [Note] WSREP: STATE_EXCHANGE: sent state UUID: b108e94c-f134-11ee-ac13-321fb976ab0c
2024-04-03  5:05:10 1 [Note] WSREP: Starting rollbacker thread 1
2024-04-03  5:05:10 2 [Note] WSREP: Starting applier thread 2
2024-04-03  5:05:10 0 [Note] WSREP: STATE EXCHANGE: sent state msg: b108e94c-f134-11ee-ac13-321fb976ab0c
2024-04-03  5:05:10 0 [Note] WSREP: STATE EXCHANGE: got state msg: b108e94c-f134-11ee-ac13-321fb976ab0c from 0 (data-server)
2024-04-03  5:05:10 0 [Note] WSREP: STATE EXCHANGE: got state msg: b108e94c-f134-11ee-ac13-321fb976ab0c from 1 (web02-server)
2024-04-03  5:05:10 0 [Note] WSREP: STATE EXCHANGE: got state msg: b108e94c-f134-11ee-ac13-321fb976ab0c from 2 (web01-server)
2024-04-03  5:05:10 0 [Note] WSREP: Quorum results:
    version    = 6,
    component  = PRIMARY,
    conf_id    = 44,
    members    = 2/3 (joined/total),
    act_id     = 7339907,
    last_appl. = 7339849,
    protocols  = 2/10/4 (gcs/repl/appl),
    vote policy= 0,
    group UUID = 20c1183c-e5c5-11ee-9129-97e9406cb3f8
2024-04-03  5:05:10 0 [Note] WSREP: Flow-control interval: [28, 28]
2024-04-03  5:05:10 0 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 7339908)
2024-04-03  5:05:10 2 [Note] WSREP: ####### processing CC 7339908, local, ordered
2024-04-03  5:05:10 2 [Note] WSREP: Process first view: 20c1183c-e5c5-11ee-9129-97e9406cb3f8 my uuid: b0bc65f1-f134-11ee-8af3-66b2cec80bb4
2024-04-03  5:05:10 2 [Note] WSREP: Server data-server connected to cluster at position 20c1183c-e5c5-11ee-9129-97e9406cb3f8:7339908 with ID b0bc65f1-f134-11ee-8af3-66b2cec80bb4
2024-04-03  5:05:10 2 [Note] WSREP: Server status change disconnected -> connected
2024-04-03  5:05:10 2 [Note] WSREP: ####### My UUID: b0bc65f1-f134-11ee-8af3-66b2cec80bb4
2024-04-03  5:05:10 2 [Note] WSREP: Cert index reset to 00000000-0000-0000-0000-000000000000:-1 (proto: 10), state transfer needed: yes
2024-04-03  5:05:10 0 [Note] WSREP: Service thread queue flushed.
2024-04-03  5:05:10 2 [Note] WSREP: ####### Assign initial position for certification: 00000000-0000-0000-0000-000000000000:-1, protocol version: -1
2024-04-03  5:05:10 2 [Note] WSREP: State transfer required: 
    Group state: 20c1183c-e5c5-11ee-9129-97e9406cb3f8:7339908
    Local state: 00000000-0000-0000-0000-000000000000:-1
2024-04-03  5:05:10 2 [Note] WSREP: Server status change connected -> joiner
2024-04-03  5:05:10 0 [Note] WSREP: Joiner monitor thread started to monitor
2024-04-03  5:05:10 0 [Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address '192.168.18.78' --datadir '/var/lib/mysql/' --parent 233407 --progress 0 --mysqld-args --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mariadb/plugin --user=mysql --wsrep_on=ON --wsrep_provider=/usr/lib64/galera/libgalera_smm.so --log-error=/data/log/mariadb/mariadb.log --pid-file=/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock --wsrep_start_position=20c1183c-e5c5-11ee-9129-97e9406cb3f8:7183126'
WSREP_SST: [INFO] rsync SST started on joiner (20240403 05:05:10.645)
2024-04-03  5:05:11 2 [Note] WSREP: ####### IST uuid:00000000-0000-0000-0000-000000000000 f: 0, l: 7339908, STRv: 3
2024-04-03  5:05:11 2 [Note] WSREP: IST receiver addr using tcp://192.168.18.78:4568
2024-04-03  5:05:11 2 [Note] WSREP: Prepared IST receiver for 0-7339908, listening at: tcp://192.168.18.78:4568
2024-04-03  5:05:11 0 [Note] WSREP: Member 0.0 (data-server) requested state transfer from '*any*'. Selected 1.0 (web02-server)(SYNCED) as donor.
2024-04-03  5:05:11 0 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 7339908)
2024-04-03  5:05:11 2 [Note] WSREP: Requesting state transfer: success, donor: 1
2024-04-03  5:05:11 2 [Note] WSREP: Resetting GCache seqno map due to different histories.
2024-04-03  5:05:11 2 [Note] WSREP: GCache history reset: 20c1183c-e5c5-11ee-9129-97e9406cb3f8:0 -> 20c1183c-e5c5-11ee-9129-97e9406cb3f8:7339908
2024-04-03  5:05:13 0 [Note] WSREP: (b0bc65f1-8af3, 'tcp://0.0.0.0:4567') turning message relay requesting off

Improve HAVING performance

What are some ways to improve MySQL performance on queries against large tables that include a HAVING clause? My understanding is that predicates in the HAVING clause don't benefit from table indexes.
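
For context, HAVING is evaluated after grouping, so only conditions on aggregates belong there; plain column predicates can usually move into WHERE, where indexes do help. A hedged sketch with illustrative table, column, and index names (none are from the original question):

-- Slower: the non-aggregate predicate is applied only after grouping.
SELECT customer_id, SUM(total) AS spend
FROM orders
GROUP BY customer_id
HAVING customer_id BETWEEN 100 AND 200 AND SUM(total) > 1000;

-- Usually faster: the range predicate moves to WHERE, so an index
-- on customer_id can narrow the rows before any grouping happens.
SELECT customer_id, SUM(total) AS spend
FROM orders
WHERE customer_id BETWEEN 100 AND 200
GROUP BY customer_id
HAVING SUM(total) > 1000;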

I don't understand this syntax error when using OpenCV

This is the error message:

%Run -c $EDITOR_CONTENT
Traceback (most recent call last):
File "<string>", line 6
image = cv2.imread(C:\Users\Audun Nilsen\Pictures\pica.webp)
^
SyntaxError: invalid character in identifier

This is the code:

import cv2 # OpenCV for image processing

image = cv2.imread(C:\Users\Audun Nilsen\Pictures\pica.webp)

Problem Returning MySQL Stored Procedure results using Python

Hey Gang!

OK today I am having trouble with my transaction processing application implemented in python/MySQL. Here is some "working" testing code.

# Only the MySQL connector is actually used below.
import mysql.connector as mysql


sql='''CREATE PROCEDURE testprocedure(OUT tacos INT)
BEGIN

 show tables;
 SET tacos := 1 ;
END'''

blank_connection_string = {
                'user': 'someusername',
                'password': 'daniwebisthebest',
                'host': 'localhost',
                'database': 'fundatabase',
                'raise_on_warnings': True }


connection=mysql.connect(**blank_connection_string, autocommit=True)
cursor=connection.cursor()


cursor.execute(sql)

cursor.callproc("testprocedure", (0, ) )
for r in cursor.stored_results():
        print(r)
        results=r.fetchall()
        print(results)
        for result in results:
                print(result)

cursor.execute("drop procedure testprocedure")

cursor.close( )
connection.close()

So it works in that everything that I can fit into a stored procedure certainly executes. But I'm getting nonsense as far as returning the variable "tacos" into any format that python can access. To wit, here is what I'm getting for script output:

MySQLCursorBuffered: (a result of CALL testprocedure(@_testpr..
[('sometable1',), ('funtablename',), ('othertable',)]
('sometable1',)
('funtablename',)
('othertable',)

In other words it seems to be doing a pretty solid job of returning the results that I DON'T need sent back to me, but is not returning the result of the actual SP that I need.

How do I resolve this?
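
For reference, the OUT parameter never arrives via stored_results(); it comes back through a session variable, and mysql-connector-python exposes it as the tuple returned by callproc() itself (result_args = cursor.callproc("testprocedure", (0,)) should leave the value in result_args[0]). The plain-SQL equivalent of that round trip, as a minimal sketch:

-- Bind the OUT argument to a user variable, then read it back
-- with an ordinary SELECT.
CALL testprocedure(@tacos);
SELECT @tacos AS tacos;  -- 1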

mysql base64 decoding?

Hey! I've been asked to store some data from a client in mysql in an encoded format using python. Nothing could be easier, right?

I will later need to decode the same data using mysql exclusively. Given that constraint, I thought that base64 would be the go to since nearly every contemporary data tool that we've heard of is base64-literate.

So from flatfile it goes into the db using python no problem.

When I try to decode the data for warehousing later, the mysql from_base64() function returns a hexadecimal string instead of....ascii string? Or whatever. Human readable string.

I've found various hacks online that...should work? But I want to code this as correctly as possible because portability is so important, and it's tough to remember whatever hack you used to get it running five years ago. I'm convinced I'm doing it wrong.

Is there a better way to python-encode data that's consistently reversible using mysql for processing?

Thanks in advance!
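
For what it's worth, FROM_BASE64() returns a binary string, which many clients render as hex; casting the result to a character type is usually the missing step. A minimal sketch, with illustrative table and column names:

-- Decode to readable text instead of a hex-rendered binary value.
SELECT CAST(FROM_BASE64(encoded_col) AS CHAR) AS decoded
FROM client_data;

-- Equivalent, with an explicit character set:
SELECT CONVERT(FROM_BASE64(encoded_col) USING utf8mb4) AS decoded
FROM client_data;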

Assigning “Usage” DB Permissions, Postgres

More programming fun!

OK this time around I'm trying to create a table with the statement,

CREATE TABLE tablename ( recnumber SERIAL  PRIMARY KEY, recordvalues VARCHAR ( 500 ) NOT NULL, datecreated DATE )

And it basically works, and I can add records using the dba account. However when I try to add records using the application db account I get the error,

permission denied for sequence _seq

A search engine claims that I need to assign "Usage" permissions, but this is turning into a rabbit hole of then needing to assign 'Usage' rights specific to something called a 'sequence.' But I feel like that will only be a dead end.

Am I better off granting . privs to the application service account? Or perhaps just running the application using the dba account? Or maybe I should switch back to Mysql?

It seems like with postgres there are a lot of hoops to jump through, and that the nuances are reducing rather than adding to the flexibility of what I'm able to configure as a DBA. This is not a rant post and I am quite happy to switch back to Mysql if that's what everyone else is doing. No point in using Postgres if it's a step backwards.

Thanks in advance!
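
For reference, the grants that typically clear this error look like the sketch below. The sequence name assumes Postgres's default tablename_columnname_seq naming (\d tablename shows the real one), and app_user is a placeholder for the application role:

-- Let the application role fetch values from the SERIAL column's sequence.
GRANT USAGE, SELECT ON SEQUENCE tablename_recnumber_seq TO app_user;

-- Or cover every sequence in the schema at once:
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO app_user;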

create a database-specific table in postgres?

How do I create a table in postgres that's associated with a specific database?

Obviously from the command line one can do

Create table tablename ();

But how do I make sure that the table is associated with the correct database?

CREATE table dbname.tablename();

is giving me the error:

ERROR:  schema "dbname" does not exist
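
The qualifier before a table name in Postgres is a schema, not a database: a table always belongs to whichever database the session is connected to. So the fix is to connect to the right database first, and use a schema only if a namespace inside it is wanted. A minimal sketch:

-- In psql, connect to the target database first:
--   \c dbname
CREATE TABLE tablename ();           -- created in dbname's default schema

-- For a namespace inside the database, use a schema:
CREATE SCHEMA myschema;
CREATE TABLE myschema.tablename ();  -- same database, schema-qualified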

DateTime in TextBox (Visual Basic)

My textbox datatype is DateTime.

In the click event of a button, the following code is placed after MyTableBindingSourceAddNew():

Dim CurrentDateTime As DateTime = DateTime.Now.ToString("dd.MM.yy hh:mm:ss")
Me.SaleDateTextBox.Text = CurrentDateTime

I then update table with TableAdapter.

On my form I have a dropdown combobox in a toolbar, with its display member set to "SaleDate".
This displays a DateTime, but the time is always 12:00:00 AM.
The SaleDateTextbox.Text populates as Date.Now.

I am not getting the correct date and time in either the textbox or the combobox.

All help appreciated.

Shane.

SQL Query Optimization: Combining Multiple Joins for Improved Performance

I'm working on an SQL query for a complex reporting system that involves multiple tables and joins. However, the query's performance is not meeting my expectations, and I suspect that the way I've structured my joins might be inefficient.

Here's a simplified version of my query:

SELECT
    orders.order_id,
    customers.customer_name,
    products.product_name,
    order_details.quantity,
    order_details.unit_price
FROM
    orders
JOIN
    customers ON orders.customer_id = customers.customer_id
JOIN
    order_details ON orders.order_id = order_details.order_id
JOIN
    products ON order_details.product_id = products.product_id
WHERE
    orders.order_date BETWEEN '2023-01-01' AND '2023-12-31';

While this query returns the correct results, it's taking a significant amount of time to execute, especially when dealing with a large dataset.

I'd like to optimize this query for better performance. Could someone review my SQL code and suggest improvements or alternative approaches to achieve the same result more efficiently? Additionally, are there any indexing strategies or database design considerations that might help enhance the query's speed? Any insights or code optimizations would be greatly appreciated. Thank you!
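
As a starting point, supporting indexes along the filter and join paths usually matter most here. A hedged sketch: the index names are invented, and it assumes customers.customer_id and products.product_id are already primary keys (and therefore indexed):

-- Narrow the date-range scan on orders.
CREATE INDEX idx_orders_order_date ON orders (order_date);

-- Support the join from orders into order_details and the
-- follow-on join to products with one composite index.
CREATE INDEX idx_order_details_order_product
    ON order_details (order_id, product_id);

-- Then compare the before/after plans with EXPLAIN on the original query.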

Cryptographic NONCE in Content Security Policy

My site is on Apache, the host has enabled 'mod_unique_id'
My CSP runs in the root .htaccess.
My host has given me 2 lines of code to put in the CSP to make an unrecognisable base64 NONCE code each time it's needed - particularly for PayPal.
What they sent:

<IfModule mod_headers.c>
    Header set X-Nonce "expr=%{base64:%{reqenv:UNIQUE_ID}}"
    Header set Content-Security-Policy "expr=default-src 'self'; script-src 'self' 'nonce-%{base64:%{reqenv:UNIQUE_ID}}'"
</IfModule>

The two lines of code go in my .htaccess somewhere. I'm pretty confident about the script-src, but the bit that's throwing me is the expr=default-src: is that a new directive?

This is my current header:

Header set Content-Security-Policy "frame-ancestors 'self' twitter.com t.co;block-all-mixed-content;default-src 'unsafe-inline' https://www.(my website).com https://www.paypal.com https://www.clarity.ms https://www.google.com https://www.paypalobjects.com;script-src 'nonce-YSBmcmllbmQgaXMgYSBwZXJzb29uIHRoYXQgd2Fsa3MgaW4gd2hlbiB0aGUgb3RoZXJzIHdhbGsgb3V0'

I need someone who knows about CSP, the directives and the workings of the NONCE to help me set it up. My regular developer can't help me, my host has no idea, nor can a reputable developer whom I call on.
Anyone? Thanks in anticipation, Steve

How do I make this complex ER diagram

An equipment company wishes to create a database to support the hiring of tools and machinery to clients. The company has three types of equipment, including power tools, such as drills and vacuum cleaners, and plant, such as excavators and scaffolding. Each piece of equipment is identified by a number. Power tools are described by their model and the voltage they use, whereas plant is classified by its model and its size in tonnes. Scaffolding can be traditional, aluminium, or fibreglass; in addition, its width can be single or double. A large piece of equipment may be composed of smaller pieces of equipment.

The company has various outlets and each has staff including a manager and several senior technicians who are responsible for supervising the work of allocated groups of technicians. A supervision record is also kept for a specific date. All employees are identified by their number, name, date of birth (DOB) and address. Furthermore, a record is kept on their employment records and their qualifications.

Each outlet has a stock of equipment that may be hired by clients for varying periods of time, from a minimum of four hours to a maximum of six months. Each hire agreement between a client and the company is uniquely identified by using a hire number. Each client is identified by a number and a name. The company insists that each client must take out insurance cover for each equipment hire period. Each insurance cover is identified by a unique insurance number and includes the description of the insurance. The company wishes to keep a record of the member of staff who was in charge of a specific hire agreement. Each piece of equipment is checked for faults when it is returned by the client, and the faults/defects/damage recorded. The company keeps a record of the hire history of each client.

  1. Create an ER diagram for the above scenario and indicate the cardinality of relationships and the nature of the associations (mandatory or optional). You should allocate adequate attributes to the entities of interest, especially the identifiers.

SQL Extensions for Time-Series Data in QuestDB

In this tutorial, you are going to learn about QuestDB SQL extensions which prove to be very useful with time-series data. Using some sample data sets, you will learn how designated timestamps work and how to use extended SQL syntax to write queries on time-series data.

Introduction

Traditionally, SQL has been used for relational databases and data warehouses. However, in recent years there has been an exponential increase in the amount of data that connected systems produce, which has brought about a need for new ways to store and analyze such information. For this reason, time-series analytics have proved critical for making sense of real-time market data in financial services, sensor data from IoT devices, and application metrics.
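
As a preview of those extensions, the sketch below uses SAMPLE BY, QuestDB's syntax for aggregating over the designated timestamp in fixed time buckets. The trades table and its columns are illustrative assumptions, not this tutorial's sample data:

-- Average price per symbol in hourly buckets, grouped on the
-- table's designated timestamp via the SAMPLE BY extension.
SELECT timestamp, symbol, avg(price) AS avg_price
FROM trades
SAMPLE BY 1h;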

Reading Data from MYSQL using Linux C

What is the best way to get one row from a MySQL table (with many columns) so I can use several fields from that row for calculations in a C program?
After executing a successful SELECT command using mysql_query, is there an alternative to using mysql_store_result?

LEFT JOIN 3 tables with where date GROUP BY and SUM

I have 3 tables: users, deposit, withdraw.
table users :

id , username, referral
  1. 1, a1, null
  2. 2, a2, a1
  3. 3, a3, a2
  4. 4, a4, a1
  5. 5, a5, a2

table deposit :

id, users_id, amount, status, approve_date
  1. 1, 1, 10000.00, approve, 2022-10-01 14:52:53
  2. 2, 3, 10000.00, approve, 2022-10-01 14:52:53
  3. 3, 3, 10000.00, approve, 2022-10-01 14:52:53
  4. 4, 3, 10000.00, approve, 2022-10-01 14:52:53
  5. 5, 5, 10000.00, approve, 2022-10-01 14:52:53

table withdraw :

id, users_id, amount, status, approve_date
  1. 1, 1, 20000.00, approve, 2022-10-01 14:52:53
  2. 2, 3, 10000.00, approve, 2022-10-01 14:52:53
  3. 3, 3, 30000.00, approve, 2022-10-01 14:52:53
  4. 4, 3, 40000.00, approve, 2022-10-01 14:52:53
  5. 5, 5, 100000.00, approve, 2022-10-01 14:52:53

I would like to get the end result shown below. How do I combine the 3 tables, with SUM(deposit.amount) and SUM(withdraw.amount) filtered by approve date and searched by referral? I apologize that I couldn't display the table data in a nicer format for easier viewing. Thank you for your help.
What I tried is this SQL code:

SELECT a.`referral`, a.`id`, a.`username`, SUM(b.`amount`) AS Total,SUM(c.`amount`) AS Totals FROM users a 
LEFT JOIN  `deposit` b ON a.`id` = b.`user_id` 
LEFT JOIN  `withdraw` c ON a.`id` = c.`user_id` 
WHERE 1
AND b.`approve_date` >= '2022-10-01 00:00:00' 
AND b.`approve_date` <= '2022-11-04 23:59:59' 
AND b.`status` = 'approve'
AND c.`status` = 'approve'
AND a.`referral` = 'a1' 
GROUP BY b.user_id,c.user_id
ORDER BY a.`id` DESC 
LIMIT 500 OFFSET 0

The result I want is like this:

username, total_deposit, total_withdraw

  1. a3, 30000, 80000
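
A note on the posted attempt: joining deposit and withdraw directly onto users multiplies their rows for each user, so both SUMs get inflated. Pre-aggregating each table in a derived table avoids that fan-out. A hedged sketch; it assumes the join column is users_id as in the sample data (the posted query says user_id), and filters on referral = 'a2' to match the expected a3 row:

SELECT u.username,
       d.total_deposit,
       w.total_withdraw
FROM users u
LEFT JOIN (
    SELECT users_id, SUM(amount) AS total_deposit
    FROM deposit
    WHERE status = 'approve'
      AND approve_date BETWEEN '2022-10-01 00:00:00' AND '2022-11-04 23:59:59'
    GROUP BY users_id
) d ON d.users_id = u.id
LEFT JOIN (
    SELECT users_id, SUM(amount) AS total_withdraw
    FROM withdraw
    WHERE status = 'approve'
      AND approve_date BETWEEN '2022-10-01 00:00:00' AND '2022-11-04 23:59:59'
    GROUP BY users_id
) w ON w.users_id = u.id
WHERE u.referral = 'a2'
ORDER BY u.id DESC;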

7 Redis Interview Questions Job-Seekers Should Be Ready to Answer

The job you're applying for says, "Redis experience required." Are you ready for the questions a hiring manager is likely to ask? Here's how to prepare for the job interview... as inspired by Lord of the Rings.

Nominally the article is written for someone getting answers ready for Redis-specific questions, but I think they'd apply to any database-related job, especially if you have experience with one tool and the job asks for experience with another.