How can I upload a specific row without updating all the rows?

I have created a table called supplies_table with the columns id, name, and files, and an upload button that stores an uploaded PDF file in the files column as a BLOB. The upload itself works, but when I upload a file for a specific row, all the other rows get updated with the same PDF.

Here's my code:

<?php

session_start();

$servername = "localhost";
$username = "root";
$password = "";
$dbname = "myDatabase";

$connect = mysqli_connect($servername, $username, $password, $dbname);

if(count($_FILES) > 0)
{
    if(is_uploaded_file($_FILES['file']['tmp_name']))
    {
        $_SESSION['id'];
        $file = file_get_contents($_FILES['file']['tmp_name']);
        $sql = "UPDATE supplies_table SET files = ? WHERE id=id";
        $statement = $connect->prepare($sql);
        $statement->bind_param('s',$file);
        $current_id = $statement->execute();
    }
}
mysqli_close($connect);
?>

What can I do to update just one row in the table?
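One likely fix (a sketch, assuming the target row's id was stored in $_SESSION['id'] on an earlier page, which the posted code hints at): WHERE id=id compares the id column to itself, which is true for every row, so every row gets updated. Bind the id as a second placeholder instead:

<?php

session_start();

$connect = mysqli_connect("localhost", "root", "", "myDatabase");

if (count($_FILES) > 0 && is_uploaded_file($_FILES['file']['tmp_name'])) {
    $id   = $_SESSION['id'];  // assumed to be set earlier; adjust to however you pass the row id
    $file = file_get_contents($_FILES['file']['tmp_name']);

    // Placeholder for the id, so only the matching row is updated
    $sql = "UPDATE supplies_table SET files = ? WHERE id = ?";
    $statement = $connect->prepare($sql);
    $statement->bind_param('si', $file, $id);  // for very large files, consider type 'b' with send_long_data
    $statement->execute();
}
mysqli_close($connect);
?>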

Need help with this DBMS interview concept

Hey Peeps,
Hope you are all doing well. Happy to be part of this community. I am fairly new to the programming world and am exploring a few job opportunities in this field.

Coming to my query: I was going through a list of interview questions, and although the concepts seem easy enough to understand, examples would be super helpful. Since these are interview questions, I want to understand each concept in depth.

Q) Explain the difference between intension and extension in a database... I would be extremely grateful if someone out here could explain this with the help of a query.

Thanks..!
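For anyone else prepping with this question, a short illustration (a hypothetical student table; the names are made up): the intension of a relation is its time-independent definition, the schema with its attributes and constraints, while the extension is the set of tuples stored in it at a particular moment, which changes with every INSERT or DELETE.

-- Intension: the definition of the relation (does not change as data changes)
CREATE TABLE student (
    roll_no INT PRIMARY KEY,
    name    VARCHAR(50),
    dept    VARCHAR(20)
);

-- Extension: the tuples currently in the relation (changes over time)
INSERT INTO student VALUES (1, 'Asha', 'CS');
INSERT INTO student VALUES (2, 'Ravi', 'EE');

SELECT * FROM student;  -- returns the current extension: 2 rows now, maybe 200 next term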

I'm overlooking something

I am running Ubuntu 22.04 with PHP 8.1 and MariaDB, and I am missing something in my code that I just can't see. The premise of the code is to look at the TrolleyID field: if it's '000000', it echoes "BAD READ", otherwise "GOOD READ". Any help would be great. Thank you.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<body>
<?php

$hostname = "localhost";
$username = "xxx";
$password = "xxx";
$db = "xxxxx";
$dbconnect=mysqli_connect($hostname,$username,$password,$db);

if ($dbconnect->connect_error) {
  die("Database connection failed: " . $dbconnect->connect_error);
}

?>

<table width="511" border="1" align="center">
<tr>
  <td width="133">Date</td>
  <td width="148">Time</td>
  <td width="208">TrolleyID</td>
  <td width="208">Status</td>
</tr>


        <?php
        // NOTE: $result is never defined in the code as posted, which is the likely
        // problem. A query like the one below is needed first; 'trolley_log' is an
        // assumed table name, substitute your own.
        $result = mysqli_query($dbconnect, "SELECT datecol, timecol, trolleyID FROM trolley_log");

        while($res = mysqli_fetch_array($result)) {

            $stat=$res['trolleyID'];
            if($stat=="000000")
            {
                $color="color:red";
                $text="BAD READ";
            }
            else 
            {
                $color="color:green";
                $text="GOOD READ";
            }

            echo "<tr>";
                echo "<td>".$res['datecol']."</td>";
                echo "<td>".$res['timecol']."</td>";
                echo "<td>".$res['trolleyID']."</td>";
                echo "<td style='$color'>".$text."</td>"; 
            echo "</tr>";

        }
        ?>

</table>
</body>
</html>

What books would you recommend for people new to Redis?

I'm working on a blog post about this topic. I'd love to hear your suggestions!

  • What book(s) should someone read to come up to speed?
  • Why do you recommend that one?

Note that the books don't have to be specific to Redis. For example, someone suggested Designing Data-Intensive Applications because it's a great general-purpose guide to data storage technologies, why people use the different options, a great dose of historical context, and similar.

Or to put it another way: what book do you wish you'd read before you got started?

Database application with vb.net

Good day, everybody. I want to develop a database program for a school to input information about students and all the staff. The program should be able to retrieve any required information and also display the total number of students in the database. Because I don't have any knowledge of SQL, I want to use a Microsoft Access database. Could someone please assist with the complete code?
Thank you
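As a starting point, here is a minimal VB.NET sketch of one piece of the request, counting the students in an Access database (school.accdb and the Students table are hypothetical names; only one line of SQL is involved):

Imports System.Data.OleDb

Module StudentCount
    Sub Main()
        ' Hypothetical path and table name; adjust to your own database file.
        Dim connStr As String =
            "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=school.accdb"
        Using conn As New OleDbConnection(connStr)
            conn.Open()
            Using cmd As New OleDbCommand("SELECT COUNT(*) FROM Students", conn)
                Dim total As Integer = CInt(cmd.ExecuteScalar())
                Console.WriteLine("Total students: " & total)
            End Using
        End Using
    End Sub
End Module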

Time to request

Hello all,

I would like to enforce a cooldown: after someone has made 2 requests, they must wait a set amount of time before they can make a request again. Does anyone have an example of this?

thanks in advance
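A minimal sketch of one way to do this in PHP with sessions (the limit of 2 requests and the 60-second window are placeholder values; per-user state in $_SESSION only works on a single server):

<?php
session_start();

$limit  = 2;    // max requests per window
$window = 60;   // window length in seconds

$now = time();
// Keep only the timestamps that are still inside the current window
$_SESSION['hits'] = array_filter(
    $_SESSION['hits'] ?? [],
    fn($t) => $now - $t < $window
);

if (count($_SESSION['hits']) >= $limit) {
    $wait = $window - ($now - min($_SESSION['hits']));
    http_response_code(429);
    exit("Too many requests. Try again in {$wait} seconds.");
}

$_SESSION['hits'][] = $now;
echo "Request accepted.";
?>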

DistSQL Applications: Building a Dynamic Distributed Database

Background

Ever since the release of ShardingSphere 5.0.0, DistSQL has been providing strong dynamic management capabilities to the ShardingSphere ecosystem.

Thanks to DistSQL, users have been empowered to do the following:

  • Create logical databases online.
  • Dynamically configure rules (i.e. sharding, data encryption, read/write splitting, database discovery, shadow DB, and global rules).
  • Adjust storage resources in real-time.
  • Switch transaction types instantly.
  • Turn SQL logs on and off at any time.
  • Preview the SQL routing results.
At the same time, in the context of increasingly diversified scenarios, more and more DistSQL features are being created, and a variety of valuable syntaxes have been gaining popularity among users.
Overview

This post takes data sharding as an example to illustrate DistSQL's application scenarios and related sharding methods.

A series of DistSQL statements are strung together through a practical case to give you a complete DistSQL-based sharding management scheme, which creates and maintains distributed databases through dynamic management. The statements used in this example are introduced step by step in the sections below.

Practical Case

Required scenarios

  • Create two sharding tables: t_order and t_order_item.
  • For both tables, database shards are carried out with the user_id field, and table shards with the order_id field.
  • The number of shards is 2 databases * 3 tables.

Setting up the environment

1.Prepare a MySQL database instance for access. Create two new databases: demo_ds_0 and demo_ds_1.

Here we take MySQL as an example, but you can also use PostgreSQL or openGauss databases.

2.Deploy Apache ShardingSphere-Proxy 5.1.2 and Apache ZooKeeper. ZooKeeper acts as a governance center and stores ShardingSphere metadata information.

3.Configure server.yaml in the Proxy conf directory as follows:

mode:
  type: Cluster
  repository:
    type: ZooKeeper
    props:
      namespace: governance_ds
      server-lists: localhost:2181  # ZooKeeper address
      retryIntervalMilliseconds: 500
      timeToLiveSeconds: 60
      maxRetries: 3
      operationTimeoutMilliseconds: 500
  overwrite: false
rules:
  - !AUTHORITY
    users:
      - root@%:root

4.Start ShardingSphere-Proxy and connect to the Proxy with a client, for example:

mysql -h 127.0.0.1 -P 3307 -u root -p

Creating a distributed database

CREATE DATABASE sharding_db;
USE sharding_db;

Adding storage resources
1.Add storage resources corresponding to the prepared MySQL databases.

ADD RESOURCE ds_0 (
    HOST=127.0.0.1,
    PORT=3306,
    DB=demo_ds_0,
    USER=root,
    PASSWORD=123456
), ds_1(
    HOST=127.0.0.1,
    PORT=3306,
    DB=demo_ds_1,
    USER=root,
    PASSWORD=123456
);

2.View storage resources

mysql> SHOW DATABASE RESOURCES\G;
*************************** 1. row ***************************
                           name: ds_1
                           type: MySQL
                           host: 127.0.0.1
                           port: 3306
                             db: demo_ds_1
                            -- Omit partial attributes
*************************** 2. row ***************************
                           name: ds_0
                           type: MySQL
                           host: 127.0.0.1
                           port: 3306
                             db: demo_ds_0
                            -- Omit partial attributes

Appending \G to the query statement makes the output format more readable; it is optional.

Creating sharding rules
ShardingSphere's sharding rules support regular sharding and automatic sharding. Both sharding methods have the same effect; the difference is that the configuration of automatic sharding is more concise, while regular sharding is more flexible and independent.

Please refer to the following links for more details on automatic sharding:

Intro to DistSQL-An Open Source and More Powerful SQL

AutoTable: Your Butler-Like Sharding Configuration Tool

Next, we'll adopt regular sharding and use the INLINE expression algorithm to implement the sharding scenario described in the requirements.

Primary key generator

The primary key generator can generate a secure and unique primary key for a data table in a distributed scenario. For details, refer to the document Distributed Primary Key.

1.Create the primary key generator.

CREATE SHARDING KEY GENERATOR snowflake_key_generator (
TYPE(NAME=SNOWFLAKE)
);

2.Query primary key generator

mysql> SHOW SHARDING KEY GENERATORS;
+-------------------------+-----------+-------+
| name                    | type      | props |
+-------------------------+-----------+-------+
| snowflake_key_generator | snowflake | {}    |
+-------------------------+-----------+-------+
1 row in set (0.01 sec)

Sharding algorithm

1.Create a database sharding algorithm, used by t_order and t_order_item in common.

-- Modulo 2 based on user_id in database sharding
CREATE SHARDING ALGORITHM database_inline (
TYPE(NAME=INLINE,PROPERTIES("algorithm-expression"="ds_${user_id % 2}"))
);

2.Create different table sharding algorithms for t_order and t_order_item.

-- Modulo 3 based on order_id in table sharding
CREATE SHARDING ALGORITHM t_order_inline (
TYPE(NAME=INLINE,PROPERTIES("algorithm-expression"="t_order_${order_id % 3}"))
);
CREATE SHARDING ALGORITHM t_order_item_inline (
TYPE(NAME=INLINE,PROPERTIES("algorithm-expression"="t_order_item_${order_id % 3}"))
);

3.Query sharding algorithm

mysql> SHOW SHARDING ALGORITHMS;
+---------------------+--------+---------------------------------------------------+
| name                | type   | props                                             |
+---------------------+--------+---------------------------------------------------+
| database_inline     | inline | algorithm-expression=ds_${user_id % 2}            |
| t_order_inline      | inline | algorithm-expression=t_order_${order_id % 3}      |
| t_order_item_inline | inline | algorithm-expression=t_order_item_${order_id % 3} |
+---------------------+--------+---------------------------------------------------+
3 rows in set (0.00 sec)

Default sharding strategy

A sharding strategy consists of a sharding key and a sharding algorithm; please refer to Sharding Strategy for the concept. Sharding strategies come in two kinds: databaseStrategy and tableStrategy.

Since t_order and t_order_item have the same database sharding field and sharding algorithm, we create a default strategy that will be used by all shard tables with no sharding strategy configured:

1.Create a default database sharding strategy

CREATE DEFAULT SHARDING DATABASE STRATEGY (
TYPE=STANDARD,SHARDING_COLUMN=user_id,SHARDING_ALGORITHM=database_inline
);

2.Query default strategy

mysql> SHOW DEFAULT SHARDING STRATEGY\G;
*************************** 1. row ***************************
                    name: TABLE
                    type: NONE
         sharding_column:
 sharding_algorithm_name:
 sharding_algorithm_type:
sharding_algorithm_props:
*************************** 2. row ***************************
                    name: DATABASE
                    type: STANDARD
         sharding_column: user_id
 sharding_algorithm_name: database_inline
 sharding_algorithm_type: inline
sharding_algorithm_props: {algorithm-expression=ds_${user_id % 2}}
2 rows in set (0.00 sec)

The default table sharding strategy is not configured, so the default strategy of TABLE is NONE.

Sharding rules

The primary key generator and sharding algorithm are both ready. Now create sharding rules.

1.t_order

CREATE SHARDING TABLE RULE t_order (
DATANODES("ds_${0..1}.t_order_${0..2}"),
TABLE_STRATEGY(TYPE=STANDARD,SHARDING_COLUMN=order_id,SHARDING_ALGORITHM=t_order_inline),
KEY_GENERATE_STRATEGY(COLUMN=order_id,KEY_GENERATOR=snowflake_key_generator)
);

DATANODES specifies the data nodes of shard tables.
TABLE_STRATEGY specifies the table strategy, whose SHARDING_ALGORITHM uses the previously created sharding algorithm t_order_inline;
KEY_GENERATE_STRATEGY specifies the primary key generation strategy of the table. Skip this configuration if primary key generation is not required.

2.t_order_item

CREATE SHARDING TABLE RULE t_order_item (
DATANODES("ds_${0..1}.t_order_item_${0..2}"),
TABLE_STRATEGY(TYPE=STANDARD,SHARDING_COLUMN=order_id,SHARDING_ALGORITHM=t_order_item_inline),
KEY_GENERATE_STRATEGY(COLUMN=order_item_id,KEY_GENERATOR=snowflake_key_generator)
);

3.Query sharding rules

mysql> SHOW SHARDING TABLE RULES\G;
*************************** 1. row ***************************
                            table: t_order
                actual_data_nodes: ds_${0..1}.t_order_${0..2}
              actual_data_sources:
           database_strategy_type: STANDARD
         database_sharding_column: user_id
 database_sharding_algorithm_type: inline
database_sharding_algorithm_props: algorithm-expression=ds_${user_id % 2}
              table_strategy_type: STANDARD
            table_sharding_column: order_id
    table_sharding_algorithm_type: inline
   table_sharding_algorithm_props: algorithm-expression=t_order_${order_id % 3}
              key_generate_column: order_id
               key_generator_type: snowflake
              key_generator_props:
*************************** 2. row ***************************
                            table: t_order_item
                actual_data_nodes: ds_${0..1}.t_order_item_${0..2}
              actual_data_sources:
           database_strategy_type: STANDARD
         database_sharding_column: user_id
 database_sharding_algorithm_type: inline
database_sharding_algorithm_props: algorithm-expression=ds_${user_id % 2}
              table_strategy_type: STANDARD
            table_sharding_column: order_id
    table_sharding_algorithm_type: inline
   table_sharding_algorithm_props: algorithm-expression=t_order_item_${order_id % 3}
              key_generate_column: order_item_id
               key_generator_type: snowflake
              key_generator_props:
2 rows in set (0.00 sec)

So far, the sharding rules for t_order and t_order_item have been configured.

A bit complicated? Well, you can also skip the steps of creating the primary key generator, sharding algorithm, and default strategy, and complete the sharding rules in one step. Let's see how to make it easier.

Syntax
Now, if we have to add a shard table t_order_detail, we can create sharding rules as follows:

CREATE SHARDING TABLE RULE t_order_detail (
DATANODES("ds_${0..1}.t_order_detail_${0..1}"),
DATABASE_STRATEGY(TYPE=STANDARD,SHARDING_COLUMN=user_id,SHARDING_ALGORITHM(TYPE(NAME=INLINE,PROPERTIES("algorithm-expression"="ds_${user_id % 2}")))),
TABLE_STRATEGY(TYPE=STANDARD,SHARDING_COLUMN=order_id,SHARDING_ALGORITHM(TYPE(NAME=INLINE,PROPERTIES("algorithm-expression"="t_order_detail_${order_id % 3}")))),
KEY_GENERATE_STRATEGY(COLUMN=detail_id,TYPE(NAME=snowflake))
);

Note: The above statement specified the database sharding strategy, table strategy, and primary key generation strategy, but it didn't use existing algorithms.

Therefore, the DistSQL engine automatically uses the input expression to create an algorithm for the sharding rules of t_order_detail. Now the primary key generator, sharding algorithm, and sharding rules are as follows:

1.Primary key generator

mysql> SHOW SHARDING KEY GENERATORS;
+--------------------------+-----------+-------+
| name                     | type      | props |
+--------------------------+-----------+-------+
| snowflake_key_generator  | snowflake | {}    |
| t_order_detail_snowflake | snowflake | {}    |
+--------------------------+-----------+-------+
2 rows in set (0.00 sec)

2.Sharding algorithm

mysql> SHOW SHARDING ALGORITHMS;
+--------------------------------+--------+-----------------------------------------------------+
| name                           | type   | props                                               |
+--------------------------------+--------+-----------------------------------------------------+
| database_inline                | inline | algorithm-expression=ds_${user_id % 2}              |
| t_order_inline                 | inline | algorithm-expression=t_order_${order_id % 3}        |
| t_order_item_inline            | inline | algorithm-expression=t_order_item_${order_id % 3}   |
| t_order_detail_database_inline | inline | algorithm-expression=ds_${user_id % 2}              |
| t_order_detail_table_inline    | inline | algorithm-expression=t_order_detail_${order_id % 3} |
+--------------------------------+--------+-----------------------------------------------------+
5 rows in set (0.00 sec)

3.Sharding rules

mysql> SHOW SHARDING TABLE RULES\G;
*************************** 1. row ***************************
                            table: t_order
                actual_data_nodes: ds_${0..1}.t_order_${0..2}
              actual_data_sources:
           database_strategy_type: STANDARD
         database_sharding_column: user_id
 database_sharding_algorithm_type: inline
database_sharding_algorithm_props: algorithm-expression=ds_${user_id % 2}
              table_strategy_type: STANDARD
            table_sharding_column: order_id
    table_sharding_algorithm_type: inline
   table_sharding_algorithm_props: algorithm-expression=t_order_${order_id % 3}
              key_generate_column: order_id
               key_generator_type: snowflake
              key_generator_props:
*************************** 2. row ***************************
                            table: t_order_item
                actual_data_nodes: ds_${0..1}.t_order_item_${0..2}
              actual_data_sources:
           database_strategy_type: STANDARD
         database_sharding_column: user_id
 database_sharding_algorithm_type: inline
database_sharding_algorithm_props: algorithm-expression=ds_${user_id % 2}
              table_strategy_type: STANDARD
            table_sharding_column: order_id
    table_sharding_algorithm_type: inline
   table_sharding_algorithm_props: algorithm-expression=t_order_item_${order_id % 3}
              key_generate_column: order_item_id
               key_generator_type: snowflake
              key_generator_props:
*************************** 3. row ***************************
                            table: t_order_detail
                actual_data_nodes: ds_${0..1}.t_order_detail_${0..1}
              actual_data_sources:
           database_strategy_type: STANDARD
         database_sharding_column: user_id
 database_sharding_algorithm_type: inline
database_sharding_algorithm_props: algorithm-expression=ds_${user_id % 2}
              table_strategy_type: STANDARD
            table_sharding_column: order_id
    table_sharding_algorithm_type: inline
   table_sharding_algorithm_props: algorithm-expression=t_order_detail_${order_id % 3}
              key_generate_column: detail_id
               key_generator_type: snowflake
              key_generator_props:
3 rows in set (0.01 sec)

Note: In the CREATE SHARDING TABLE RULE statement, DATABASE_STRATEGY, TABLE_STRATEGY, and KEY_GENERATE_STRATEGY can all reuse existing algorithms.

Alternatively, they can be defined quickly through syntax. The difference is that additional algorithm objects are created. Users can use it flexibly based on scenarios.

After the rules are created, you can verify the configuration in the following ways:

Checking node distribution

DistSQL provides SHOW SHARDING TABLE NODES for checking node distribution and users can quickly learn the distribution of shard tables.
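A sketch of the expected result under the rules above (the exact output layout varies by version; the node lists follow directly from the DATANODES expressions):

mysql> SHOW SHARDING TABLE NODES;
-- t_order:        ds_0.t_order_0, ds_0.t_order_1, ds_0.t_order_2,
--                 ds_1.t_order_0, ds_1.t_order_1, ds_1.t_order_2
-- t_order_item:   ds_0.t_order_item_0 ... ds_1.t_order_item_2
-- t_order_detail: ds_0.t_order_detail_0 ... ds_1.t_order_detail_1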

We can see that the node distribution of the shard tables is consistent with what is described in the requirements.

SQL Preview

Previewing SQL is also an easy way to verify configurations. Its syntax is PREVIEW SQL:

1.A query with no shard key is routed to all data nodes (full route)

2.Specifying user_id routes the query to a single database

3.Specifying both user_id and order_id routes the query to a single table

Sketches of the three cases are shown below.
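Hypothetical previews against t_order (each statement's output shows the SQL actually routed to the data sources):

-- 1. No shard key: full route to all data nodes
PREVIEW SELECT * FROM t_order;

-- 2. Shard key user_id only: single database route
PREVIEW SELECT * FROM t_order WHERE user_id = 1;

-- 3. Both user_id and order_id: single table route
PREVIEW SELECT * FROM t_order WHERE user_id = 1 AND order_id = 1;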

Single-table routes scan the least shard tables and offer the highest efficiency.

DistSQL auxiliary query

During system maintenance, algorithms or storage resources that are no longer in use may need to be released, or resources that should be released may still be referenced and cannot be deleted. The following DistSQL statements solve these problems.

Query unused resources

Syntax: SHOW UNUSED RESOURCES

Query unused primary key generator

Syntax: SHOW UNUSED SHARDING KEY GENERATORS

Query unused sharding algorithm

Syntax: SHOW UNUSED SHARDING ALGORITHMS

Query rules that use the target storage resources

Syntax: SHOW RULES USED RESOURCE

All rules that use the resource can be queried, not limited to the Sharding Rule.

Query sharding rules that use the target primary key generator

Syntax: SHOW SHARDING TABLE RULES USED KEY GENERATOR

Query sharding rules that use the target algorithm

Syntax: SHOW SHARDING TABLE RULES USED ALGORITHM
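As a quick sketch, here is how these auxiliary statements could be invoked against the objects created in this post (the names are the ones used above; check the DistSQL manual for the exact argument syntax):

SHOW UNUSED RESOURCES;
SHOW UNUSED SHARDING KEY GENERATORS;
SHOW UNUSED SHARDING ALGORITHMS;
SHOW RULES USED RESOURCE ds_0;
SHOW SHARDING TABLE RULES USED KEY GENERATOR snowflake_key_generator;
SHOW SHARDING TABLE RULES USED ALGORITHM t_order_inline;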

Conclusion

This post took the data sharding scenario as an example to introduce DistSQL's applications and methods.

DistSQL provides flexible syntax to help simplify operations. In addition to the INLINE algorithm, DistSQL supports standard sharding, compound sharding, hint sharding, and custom sharding algorithms. More examples will be covered in future posts.

If you have any questions or suggestions about Apache ShardingSphere, please feel free to post them on the GitHub Issue list.

Project Links:

ShardingSphere Github

ShardingSphere Twitter

ShardingSphere Slack

Contributor Guide

GitHub Issues


References
  1. Concept-DistSQL

  2. Concept-Distributed Primary Key

  3. Concept-Sharding Strategy

  4. Concept-INLINE Expression

  5. Built-in Sharding Algorithm

  6. User Manual: DistSQL

Author

Jiang Longtao
SphereEx Middleware R&D Engineer & Apache ShardingSphere Committer.

Longtao focuses on the R&D of DistSQL and related features.

How to write a program that takes various company information as input

Hi,
I want to write a program whose input is the names of companies. When adding each company's name, it should collect various information about that company, of different types:
1- TextBox (daily production rate),
2- CheckBox (select product features),
3- OptionButton (the gender of the company owner),
4- Date (product delivery time, yy/mm/dd), ... all in a form, and store all that information in that company's profile.

I have just started programming. I do have an average familiarity with Matlab, which I know is not suitable for writing this program.

What programming language should I use for this task, so that I can design a good-looking GUI for entering information and the user can enter this information easily?

Is there a pre-written program close to my goal that I could adapt to suit my needs?

Thanks in advance.

Storing ipv4 and ipv6 in MongoDB

Hello

I am building a database (I prefer MongoDB) in which I will store over 100 million IPv4 and IPv6 records for logging purposes. Data sample:

1.1.1.1 -> 0x1010101
1.2.3.4 -> 0x1020304
34.53.63.25 -> 0x22353f19
255.255.255.255 -> 0xffffffff

0001:0001:0001:0001:0001:0001:0001:0001 -> 0x10001000100010001000100010001
1111:1111:1111:1111:1111:1111:1111:1111 -> 0x11111111111111111111111111111111
2345:0425:2CA1:0000:0000:0567:5673:23b5 -> 0x234504252ca1000000000567567323b5
2345:0425:2CA1::0567:5673:23b5          -> 0x234504252ca1000000000567567323b5
ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff -> 0xffffffffffffffffffffffffffffffff

I will have a lot of queries retrieving data by IP. I don't care about space; queries must be as fast as possible.

I was thinking about storing the addresses in binary, or splitting them into 4 separate fields (for IPv4) or 8 separate fields (for IPv6), one per IP part.

What is the most efficient way in terms of speed to achieve that?
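One common approach (a sketch in Python, not the only option): normalize every address to a fixed-width hex key, so a single indexed field compares and sorts correctly within each family. The standard ipaddress module produces the values shown in the samples above; note that zero-padding (unlike the unpadded 0x... samples) is what keeps string comparison consistent with numeric order:

import ipaddress

def ip_to_hex(ip: str) -> str:
    """Normalize an IPv4/IPv6 string to a zero-padded hex key."""
    addr = ipaddress.ip_address(ip)
    width = 8 if addr.version == 4 else 32  # 32 or 128 bits, 4 bits per hex digit
    return format(int(addr), '0{}x'.format(width))

print(ip_to_hex('1.2.3.4'))                         # 01020304
print(ip_to_hex('2345:0425:2CA1::0567:5673:23b5'))  # 234504252ca1000000000567567323b5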

Please help with Python code

I am new here. I really need the help of the Python gurus here to solve this statistics/analytics problem of mine.

I want to write Python code that prompts for an input, saves the new input at the first position of a table of numbers while deleting the number in the last position, and then searches the table for specified decimal numbers, giving an output based on a particular condition.

Here is an example of the table.

7.48, 1.11, 1.16, 2.00, 1.49, 3.43, 4.20, 1.04, 20.56, 3.53, 5.61, 2.01, 1.58, 3.91, 3.63, 1.40, 2.03, 1.50, 1.05, 1.01,

In the table, the oldest entry is 1.01 (bottom right) while the most recent is 7.48 (top left). So whenever the code prompts for an input, it saves the new value at the top left (where 7.48 is) and deletes the value at the bottom right (where 1.01 is).
So if I input 1.60, the code executes and the table becomes:

1.60, 7.48, 1.11, 1.16, 2.00, 1.49, 3.43, 4.20, 1.04, 20.56, 3.53, 5.61, 2.01, 1.58, 3.91, 3.63, 1.40, 2.03, 1.50, 1.05
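A minimal sketch of this first task (assuming the table is simply a Python list of 20 floats, most recent first):

def add_reading(table, value, size=20):
    """Insert the newest value at the front and drop the oldest from the back."""
    table.insert(0, value)
    if len(table) > size:
        table.pop()  # discard the oldest entry (bottom right)
    return table

table = [7.48, 1.11, 1.16, 2.00, 1.49, 3.43, 4.20, 1.04, 20.56, 3.53,
         5.61, 2.01, 1.58, 3.91, 3.63, 1.40, 2.03, 1.50, 1.05, 1.01]
add_reading(table, float(input("Enter a number: ")))
print(table)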

Now for the second task the code does: it searches the table for a set of predetermined decimal numbers. There are basically four, but each has variants, and for this code a decimal and its variants count as the same. The decimal numbers are 1.3, 1.4, 1.5, and 1.6.
The variants are infinite, but they must contain the digits that make up the decimal, separated by a decimal point. For example, 60.01, 63.1, 12.56, and 180.66 are all variants of 1.6 and will be recognized by the code as 1.6. But 16.0 is not a variant of 1.6, because the digits 6 and 1 aren't separated by the decimal point.
Similarly, 40.1, 54.81, and 51.49 are all variants of the decimal number 1.4, but 6.14 is not a variant of 1.4, as the digits 1 and 4 aren't separated by the decimal point.

So the code searches for the last three occurrences of the decimals/their variants, and then checks whether there is a number equal to or greater than 2 at an equal distance before the decimal in each of those last 3 occurrences. I will explain this better using the table.
If I execute the code on the first table above, I expect to get this as one of the outputs for decimal 1.4: "Decimal 1.4 is checked at distance 1".
Reading the table from top left to right, the last 3 occurrences of decimal 1.4 are:

1.49 (first row, column five)
1.04 (second row, column three)
1.40 (fourth row, column one)

From left to right, the number that precedes each of the three decimals (2.00, 4.20, and 3.63) is equal to or greater than 2, and each is at the same distance from its decimal (1.49, 1.04, 1.40). This distance counts as 1 because each of them (2.00, 4.20, 3.63) is one step before the predetermined decimal (reading the data left to right and top to bottom).
The code then checks all three occurrences of the decimal again at distance 2: if there is a number equal to or greater than 2 preceding all of them, it outputs "Decimal 1.4 is checked at distance 2". If there is no number equal to or greater than 2 at the same distance for all 3 occurrences, it outputs the previously checked result (if there was one), or it simply says decimal 1.4 is not checked.

Using another example, reading from top left to right in the first table, the last three occurrences of decimal 1.5/its variants are:

1.58 (third row, column three)
5.61 (third row, column one)
1.05 (fourth row, column four)

There is another 1.5 variant in row four, but we only deal with the last 3 occurrences of the decimal we are analyzing, so the code leaves it out.

It will give this output as one of its results: "Decimal 1.5 is checked at distance 4".
That's because, counting from right to left for the 3 most recent 1.5 occurrences (1.58, 5.61, and 1.05), at exactly 4 places before each of them there is a number equal to or greater than 2, at the same distance for all three.
Hence it satisfies the condition.

Please help me out. If further explanation is needed, I will be glad to provide it.

Edit: The table was originally in columns and rows, but after posting it here it became just two lines of numbers. It still works; the rows and columns are just for explanation.

MongoDB ipv4/ipv6 hex index

I am building an IPv4/IPv6 geo-IP MongoDB database, and I will have millions (100+ million) of IPs.

The structure of the database will be:

[
    { _id: 58fdbf5c0ef8a50b4cdd9a8e, ip: '34.53.63.25', ip_hex: '0x22353f19', type: "ipv4", data: [
        {
            country : "CA",
            region : "region1",
            city : "city1",
            blacklisted : "no",
            on_date : ISODate("2022-05-05T00:00:00Z")
        },
        {
            country : "CA",
            region : "region1",
            city : "city1",
            blacklisted : "yes",
            on_date : ISODate("2022-06-05T00:00:00Z")
        },
        {
            country : "US",
            region : "region2",
            city : "city2",
            blacklisted : "no",
            on_date : ISODate("2022-05-05T00:00:00Z")
        },

        ...
    ]},

    { _id: 58fdbf5c0ef8a50b4cdd9a8e, ip: '1.2.3.4', ip_hex: '0x1020304', type: "ipv4", data: [
        {
            country : "CA",
            region : "region1",
            city : "city1",
            blacklisted : "no",
            on_date : ISODate("2022-06-05T00:00:00Z")
        },
    ]},

    { _id: 58fdbf5c0ef8a50b4cdd9a8e, ip: '2345:0425:2CA1:0000:0000:0567:5673:23b5', ip_hex: '0x234504252ca1000000000567567323b5', type: "ipv6", data: [
        {
            country : "FR",
            region : "region1",
            city : "city1",
            blacklisted : "no",
            on_date : ISODate("2022-06-05T00:00:00Z")
        },
        {
            country : "FR",
            region : "region1",
            city : "city1",
            blacklisted : "yes",
            on_date : ISODate("2022-07-05T00:00:00Z")
        },

        ...
    ]},
]

I am converting all IP string data to HEX :

1.1.1.1 -> 0x1010101
1.2.3.4 -> 0x1020304
34.53.63.25 -> 0x22353f19
255.255.255.255 -> 0xffffffff

0001:0001:0001:0001:0001:0001:0001:0001 -> 0x10001000100010001000100010001
1111:1111:1111:1111:1111:1111:1111:1111 -> 0x11111111111111111111111111111111
2345:0425:2CA1:0000:0000:0567:5673:23b5 -> 0x234504252ca1000000000567567323b5
2345:0425:2CA1::0567:5673:23b5          -> 0x234504252ca1000000000567567323b5
ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff -> 0xffffffffffffffffffffffffffffffff

There will be a lot of searches by IP, and new data will be added/deleted/updated for every IP each day.

I will search ranges of IPs, and sort, update, and delete.

What index is recommended on the "ip_hex" field? I was thinking about a B-tree index on the hex string rather than the original IP string.

I want to have an efficient database. What other optimizations should I take into consideration?

Thank you.
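A sketch of the usual starting point in mongosh (the collection name ips is an assumption): a regular ascending index is a B-tree, and it covers equality lookups, sorts, and range scans on the hex key, provided every ip_hex value is zero-padded to a fixed width within its address family (the unpadded samples above, e.g. 0x1020304, would break string-order range queries):

// Unique B-tree index on the normalized hex key (unique assumes one document per IP)
db.ips.createIndex({ ip_hex: 1 }, { unique: true });

// Point lookup
db.ips.find({ ip_hex: "0x22353f19" });

// Range scan over an IPv4 block; correct only with fixed-width, zero-padded keys
db.ips.find({ ip_hex: { $gte: "0x01020300", $lte: "0x010203ff" } });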

How to summarize multiple names into one where the date is equal to today

Hello, I am using VB 6.0; can someone help me with my problem? I want to summarize a database with multiple names and display them as one row per name where the date is the same as today. Thank you to anyone who responds.

For example, below is a table where multiple names were created.

In database

Name     Total Payment    Date
lius     100              07/01/22
lius     100              07/01/22
lius     50               07/01/22
era      60               07/03/22
era      50               07/03/22

So when it is executed in the ListView form, it should look like the sample below:

Name     Total Payment    Date
lius     250              07/01/22
era      110              07/03/22

Here is my code for displaying my database in the ListView form:

Set rxb = New ADODB.Recordset
rxb.Open "Select * from History order by trn", con, 3, 3

With rxb
    On Error Resume Next  ' note: this suppresses runtime errors; remove while debugging
    Do While Not .EOF
        ListView2.ListItems.Add , , !TRN, 1, 1
        ListView2.ListItems(ListView2.ListItems.count).SubItems(1) = "" & !Emp
        ListView2.ListItems(ListView2.ListItems.count).SubItems(3) = "" & !hdate
        ListView2.ListItems(ListView2.ListItems.count).SubItems(4) = "" & !htime
        ListView2.ListItems(ListView2.ListItems.count).SubItems(5) = "" & !purchases
        ListView2.ListItems(ListView2.ListItems.count).SubItems(6) = "" & !payment
        ListView2.ListItems(ListView2.ListItems.count).SubItems(7) = "" & !rBalance
        ListView2.ListItems(ListView2.ListItems.count).SubItems(8) = "" & !adjustment
        ListView2.ListItems(ListView2.ListItems.count).SubItems(9) = "" & !Pm
        ListView2.ListItems(ListView2.ListItems.count).SubItems(10) = "" & !Cashier
        .MoveNext
    Loop
    .Close
End With
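One way to get the summarized view is to let the database do the grouping (a sketch, assuming the column names from the code above, with Emp holding the name, payment the amount, and hdate the date stored as a true Date; Date() is today's date in Access/Jet SQL):

Set rxb = New ADODB.Recordset
rxb.Open "SELECT Emp, SUM(payment) AS TotalPayment, hdate " & _
         "FROM History WHERE hdate = Date() " & _
         "GROUP BY Emp, hdate", con, 3, 3

Do While Not rxb.EOF
    ListView2.ListItems.Add , , "" & rxb!Emp
    ListView2.ListItems(ListView2.ListItems.Count).SubItems(1) = "" & rxb!TotalPayment
    ListView2.ListItems(ListView2.ListItems.Count).SubItems(2) = "" & rxb!hdate
    rxb.MoveNext
Loop
rxb.Close

If hdate is stored as text rather than a Date, compare against Format(Date, "mm/dd/yy") instead.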

Benefits of Elastic Databases And Common Migration Mistakes to Avoid

Introduction

If you're like most data scientists, analysts, or developers, you probably spend a lot of time worrying about your ever-growing store of data. After all, it's the lifeblood of your business, and you need to ensure that it's properly stored and accessible when you need it. This article discusses the benefits of using elastic databases and common migration mistakes to avoid, along with some tips on optimizing your database for performance.

A recent survey from IDC found that one-third of enterprises have already migrated, or are migrating, their applications and data centers to public cloud infrastructures such as Amazon Web Services (AWS) or Microsoft Azure. This is an ongoing trend for companies looking to optimize costs while also preparing for future demands on capacity and efficiency.

However, these migrations don't come without challenges, chief among them ensuring that data integrity is maintained during the transfer between environments.

Elastic Databases To Mitigate Data Risks

One method to mitigate these risks is by using elastic database technologies that support the portability and movement of data sets between SQL Server, Oracle, MySQL, or other data stores. This is where elastic databases come into play. An elastic database is a distributed system that makes it easy to store and access large amounts of data by spreading it across many servers, also known as nodes. It provides transparent scalability with no downtime or interruptions. As a result, you can easily use elastic databases to build web applications in the cloud.

Elastic databases are designed for fast retrieval of data using caching, sharding, and partitioning techniques, where data is divided into chunks called elastic units. These elastic units are written to separate partitions on different machines so they can be read independently. Each elastic unit has metadata that records the type of unit, its owner (the machine it is assigned to), and its time of creation.

Using elastic units, elastic databases can scale to process more data at once by creating additional partitions and assigning units to them. The concept behind an elastic database is to take the idea of virtualization, which has led to significant efficiencies for server storage in data centers, and apply it to databases.

By applying this concept, elastic databases can support multiple concurrent users or workloads while delivering strong performance levels. In addition, with this capability, organizations can migrate their entire databases or subsets of data into the cloud, manage it all within a single cluster, and then move that information back to an on-premise infrastructure.

The application of elastic databases has become increasingly popular because they make migration projects simpler, faster and less expensive than previously possible. This is good news for data professionals looking to take advantage of advances in cloud-based infrastructures while maintaining good performance.

The Benefits Of Elastic Databases

Here are some of the key benefits that elastic database technologies offer during migration projects:

Access to Cloud Environments: By moving data sets into the cloud, businesses can achieve faster time-to-market, higher competitive advantage on new products and services, and lower costs through more efficient resource utilization.

Increased Data Portability: Elastic databases can transfer data between cloud-based environments with little to no performance degradation during the process. As a result, organizations can test their applications in the cloud without the risk of disrupting production accounts.

Increased Database Elasticity: The ability to scale out across multiple cloud environments allows for improved performance, higher levels of data availability, and faster time-to-market. Elastic databases also provide load balancing capabilities that enable companies to automatically distribute workloads across systems without building complex infrastructures.

Reduced Risk of Data Loss: When migrating data, organizations can lose information. However, by moving primary databases into the cloud, this risk is significantly reduced because businesses can keep their entire database in one place, with its information replicated to other locations. Therefore, if anything happens to one database, all the others remain available.

Greater Database Availability: By placing multiple databases within a single cluster, organizations can achieve higher uptime because more systems are available to ensure workloads are always handled regardless of demand. This also means that there is no need to worry about double provisioning or creating environments for testing purposes. As a result, companies can significantly reduce their operating costs because they will spend less time managing databases and more time working with customers.

Better Organizational Agility: Elasticity allows organizations to adjust their resource levels without downtime, enabling businesses to achieve faster time-to-market and increase overall productivity by making better use of existing resources. In addition, they can easily create new development environments for testing purposes, which is a big advantage when building and releasing new products and services.

Common Mistakes When Migrating

Migrating your data after elastic has started

Migration to an elastic database should always be done when the system is near or at idle. Once you start migrating, you will see where your current ingestion limits are, so you can plan for the peaks. It is beneficial to start elastic in production at a low CPU load to see how it will perform during peak hours.

Excessive vertical scaling

Sometimes you can have too much vertical scaling in your application architecture. For example, this can happen when you design a system with only one computing unit because you do not want any downtime. Then you have only one server, which can scale vertically only until it reaches maximum capacity.

This approach gives limited availability and no redundancy in hardware failures unless you implement replication or failover systems yourself.

Elastic databases are designed to scale horizontally and perform better that way. So follow the elastic approach and implement a multi-node elastic architecture for your applications.

Replicating instead of failover

In case of failure on one node, replication is often implemented instead of failover for simplicity's sake. However, this approach slows down your application because it has to reset and reload upon recovery, especially if multiple nodes are in production.

Use elastic databases with failover scripts. Your architecture will be highly available and scalable, so you can just add another node in case of hardware failures or traffic peaks.

Not using backup/recovery

Elastic databases offer a reliable way of backing up data with point-in-time recovery. At the same time, elasticity makes storage space nearly infinite on demand, since more nodes can be added whenever needed.

These days, the most commonly used backup method for elastic databases is simply copying units from one node to another. Backups run very quickly: since each unit contains metadata about its size and location on disk, only changed portions get copied during a restore. This makes restoring elastic databases reasonably easy if you have recent backups of all your nodes.

Backup/recovery is included in elastic databases, so you don't have to do it manually or with external tools.

The best way to deal with data loss is to prevent it from happening in the first place. Implement backups at different points in time into your application stack, configure elasticity properly, and elastic databases will take care of the rest.

Incorrect server deployments

During elastic database migrations, it is highly recommended to use indexing and to create mapping documents so that no data is lost. During the migration process, the elastic database should be hosted on separate servers and should not share a host with the Elasticsearch node. It should always have dedicated volumes and dedicated network interfaces.

Moving all your data at once

One common mistake when moving to elastic databases is migrating all your data from the start. While this might seem like a good idea, it often leaves the database lacking the required capacity and performance when new applications need to use it.

Once the business value has been identified for elastic databases, you need only migrate the necessary data while also ensuring that it does not impact running applications before or during any migration process.

Not using a migration tool

While elastic databases are similar to relational database management systems, many differences exist. This means that converting existing relational technology into elastic technologies can be extremely difficult, if not impossible, given the level of complexity in some organizations.

Therefore, you should convert your data using tools that are built for elastic databases. ShardingSphere's ElasticJob is an excellent example of an open-source tool that helps you migrate your data to an elastic database. ElasticJob is a distributed scheduling solution consisting of two separate projects, ElasticJob-Lite and ElasticJob-Cloud.

ElasticJob-Lite is a lightweight, decentralized solution that provides distributed task sharding services, while ElasticJob-Cloud uses Mesos to manage and isolate resources. Both share a unified job API, so developers only need to code once and can deploy at will.

Conclusion

As more data is generated every day, organizations need to rethink managing their data infrastructure. This is especially true for Database Management Systems (DBMS), which can be complex and time-consuming to maintain.

As a result, companies must consider solutions that offer more flexibility and additional benefits such as lower operating costs and higher levels of availability. Cloud-based providers are offering these types of database technologies. They have been referred to as elastic databases because they can expand with changing business needs in ways that traditional database solutions cannot.

This allows companies to respond more quickly to market demands without sacrificing the levels of service that their customers expect. In addition, organizations can save money on server hardware and software licenses because elastic databases are offered as pay-as-you-go services, which is very attractive to businesses of all sizes.

This allows companies to use existing budgetary allocations more efficiently and free up time and funds to focus on other areas that can help them grow their business.