I own a website for my business, and I really like to understand and implement security practices for it.
Anti-Hotlinking Script for WP on Apache (.htaccess) – Linkspam Prevention
Never published this before, so this is a DaniWeb.com Exclusive :)
If your WP site has a lot of K-Links, you should consider using this script.
It definitely works. For now...
You cannot defend your site against every kind of attack, but for one of the most common, you can significantly reduce the negative effects:
"K-Links" (new version: C-Links), where image hotlinking is used to generate links, mainly targeting WordPress instances.
Examples: see the screenshot k-links.png.
This is why they're called "K-Links/C-Links": the URLs always end with "-k.html" or "-c.html".
The basic anti-hotlinking script can help reduce traffic when hotlinking is abused to burn your bandwidth.
But I have never seen it recover any visibility losses in the SERPs.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?daniweb\.com [NC]
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com [NC]
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?bing\.com [NC]
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?yahoo\.com [NC]
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?duckduckgo\.com [NC]
RewriteRule \.(jpg|jpeg|png|gif|avif|webp|svg)$ /nohotlink.html [L]
</IfModule>
---
Content of nohotlink.html:
<body>
<h1>Hotlinking not allowed</h1>
<p>To view our images, please visit our <a href="https://daniweb.com/">Website</a>.</p>
</body>
It integrates the whitelist directly into .htaccess, which is not optimal.
I had a case where this caused problems because the whitelist was huge (1,000+ domains).
So I found a solution with "RewriteMap", which I integrated into this script to move the whitelist into a .txt file.
This was also easier for the client, who might need to add entries to the whitelist and this way does not have to edit the .htaccess every time.
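For reference, here is a minimal sketch of the RewriteMap variant. Note that the RewriteMap directive itself is only allowed in the server or virtual-host config, not in .htaccess; the map can then be referenced from the rewrite rules. The map name "referers" and the file path are placeholders, not the exact setup from the case above.

```apache
# --- In httpd.conf or the vhost config (RewriteMap is not allowed in .htaccess) ---
# allowed-referers.txt contains one "host value" pair per line, e.g.:
#   daniweb.com ok
#   google.com  ok
RewriteMap referers "txt:/etc/apache2/allowed-referers.txt"

# --- In .htaccess ---
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
# Capture the referring host in %1, then block if the map lookup
# falls through to the default value NOT-FOUND
RewriteCond %{HTTP_REFERER} ^https?://(?:www\.)?([^/]+)
RewriteCond ${referers:%1|NOT-FOUND} =NOT-FOUND
RewriteRule \.(jpg|jpeg|png|gif|avif|webp|svg)$ /nohotlink.html [L]
</IfModule>
```

Wait, remove that stray closing tag; the block stands on its own as shown above without an IfModule wrapper, or you can wrap it in the same IfModule mod_rewrite.c guard used earlier.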
I have also set the link inside the HTML to rel="nofollow".
I got some nice results with this!
Even if there are still other dofollow links on the hotlinking site, the presence of this one nofollow link seems to reduce the toxicity of each one.
Important: Don't link the actual canonical URL of your main page from nohotlink.html!
If your domain is https://daniweb.com, for example, link to http://www.daniweb.com instead (with "www" and plain "http").
I experimented a lot with this, set the canonical of nohotlink.html to the main page, tested with noindex and nofollow robots tags, but it was all a mess.
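Putting the two tweaks above together, a sketch of how the link inside nohotlink.html could look (using the example domain; adapt to your own):

```html
<body>
<h1>Hotlinking not allowed</h1>
<!-- nofollow link to the non-canonical variant (http + www), as described above -->
<p>To view our images, please visit our <a href="http://www.daniweb.com/" rel="nofollow">Website</a>.</p>
</body>
```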
If anybody is as deep into this stuff as I am, I will be happy to discuss.
Feel free to share your thoughts!
Disclaimer: Please use at your own risk and only if you know what you are doing. Don't hold me responsible if you make mistakes; they are yours, not mine.
It’s Time To Talk About “CSS5”
We have been talking about CSS3 for a long time. Call me a fossil, but I still remember the new border-radius property feeling like the most incredible CSS3 feature. We have moved on since we got border-radius and a slew of new features dropped in a single CSS3 release back in 2009.
CSS, too, has moved on as a language, and yet “CSS3” is still in our lexicon as the last “official” semantically-versioned release of the CSS language.
It’s not as though we haven’t gotten any new and exciting CSS features between 2009 and 2024; it’s more that the process of developing, shipping, and implementing new CSS features is a guessing game of sorts.
We see CSS Working Group (CSSWG) discussions happening in the open. We have the draft specifications and an archive of versions at our disposal. The resources are there! But the develop-ship-implement flow remains elusive and leaves many of us developers wondering: When is the next CSS release, and what’s in it?
This is a challenging balancing act. We have spec authors, code authors, and user agents working both interdependently and independently and the communication gaps are numerous and wide. The result? New features take longer to be implemented, leading to developers taking longer to adopt them. We might even consider CSS3 to be the last great big “marketing” push for CSS as a language.
That’s what the CSS-Next community is grappling with at this very moment. If you haven’t heard of the group, you’re not alone, but either way, it’s high time we shed light on it and the ideas coming from it. As someone participating in the group, I thought I would share the conversations we’re having and how we’re approaching the way CSS releases are communicated.
Meet The CSS-Next Community
Before we formally “meet” the CSS-Next group, it’s worth knowing that it is still officially referred to as the CSS4 Community Group as far as the W3C is concerned.
And that might be the very first thing you ought to know about CSS-Next: it is part of the W3C and consists of CSSWG members, developers, designers, user agents, and, really, anyone passionate about the web and who wants to participate in the discussion. W3C groups like CSS-Next are open to everyone to bring our disparate groups together, opening opportunities to shape tomorrow’s vision of the web.
CSS-Next, in particular, is where people gather to discuss the possibility of raising awareness of CSS evolutions during the last decade. At its core, the group is discussing approaches for bundling CSS features that have shipped since CSS3 was released in 2009 and how to name the bundle (or bundles, perhaps) so we have a way of referring to this particular “era” of CSS and pushing those features forward.
Why We Need A Group Like CSS-Next
Let’s go back a few years. More specifically, let’s return to the year 2020.
It all started when Safari Evangelist Jen Simmons posted an open issue in the CSSWG’s GitHub repo for CSS draft specifications requesting a definition for a “CSS4” release.
This might be one of the biggest responses — if not the biggest response — to a CSSWG issue based solely on emoji reactions.
The idea of defining CSS4 was backed by Chris Coyier, Nicole Sullivan, and PPK. The idea is to push technologies forward and help educators and site owners, even if it’s just for the sake of marketing.
But why is this important? Why should we care about another level or “CSS Saga”? To get to that point, we might need to talk about CSS3 and what exactly it defines.
What Exactly Is “CSS3”?
The CSS3 grouping of features included level-3 specs for features from typography to selectors and backgrounds. From this point on, each CSS spec has been numbered individually.
However, CSS3 is still the most common term developers use to define the capabilities of modern CSS. We see this across the web, from the way educational institutions teach CSS to the job requirements on resumes.
The term CSS3 loses meaning year-over-year. You can see the dilution everywhere. The earliest CSS3 drafts were published in June 1999 — before many of my colleagues were even born — and yet CSS is one of the fastest-growing languages in the current webscape.
What About The CSS3 Logo?
When we look at job postings, we run into vacancies asking for knowledge of CSS3, which is over 10 years old. Without an updated level, we’re just asking if you’ve written CSS since the border-radius property came out. Furthermore, when we want to learn CSS, a CSS3 logo next to educational materials no longer signals current material. It kind of feels like time has stood still.
Here’s an example job posting that illustrates the issue:
But that’s not all. If you do a Google search on “Learn CSS” and check the images, you might be surprised how many CSS3 logos you can spot:
About 50% of the images show the CSS3 badge. To me, this clearly signals:
- People want badges or logos to aid in signaling skills.
- The CSS3 brand has made a large impact on the web ecosystem.
- The CSS3 logo has reached the end of its efficacy.
CSS3 still has a huge impact on the ecosystem. The same logo is trying to say it teaches Flexbox all the way to color-mix() — a spread of hundreds of CSS features.
CSS3 and HTML5 were big improvements to those respective languages — we’ve come a long way since then. We have features that people didn’t even think were possible back in 2012 (when we officially spoke of CSS3 as a level).
For example, there was a time when people thought that containers didn’t know anything and it would never be possible to style an element based on the width of its parent. But now, of course, we have CSS Container Queries, and all of this is possible today. The things that are possible with CSS have changed over time, as so beautifully told by Miriam Suzanne at CSS Day 2023.
We do not want to ignore the success of CSS3 and say it is wrong; in fact, we believe it’s time to repeat the tremendous success of CSS3.
Imagine yourself 10 years from now reading a “modern” CSS feature that was introduced as many as 10 years ago. It wouldn’t add up, right? Modern is not a future-proof name, something that Geoff Graham opined when asking the correct question, “What exactly is ‘Modern CSS’?”
“Naming is always hard, yet it’s just something we have to do in CSS to properly select things. I think it’s time we start naming [CSS releases] like this, too. It’s only a matter of time before “modern” isn’t “modern” anymore.”
— Geoff Graham
This is exactly where the CSS-Next community group comes in.
Let’s Talk About “CSS Eras”
The CSS-Next community group aims to align and modernize the general understanding of CSS in the wider developer community by labeling feature sets that have shipped since the initial set of CSS3 features, helping developers upskill their understanding of CSS across the ecosystem.
Why Isn’t This Part Of The Web Platform Baseline?
The definition of what is “current” CSS changes with time. Sometimes, specs are incomplete or haven’t even been drafted. While Baseline looks at the current browser support of a feature in CSS, we want to take a look at the evolution of the language itself. The CSS levels should not care about which browser implemented it first.
It might be more nuanced than this in reality, but that’s pretty much the gist. We also don’t want it to become another “modern CSS” bucket. Indeed, referring to CSS3 as an “era” has helped compartmentalize how we can shift into CSS4, CSS5, and beyond. For example, labeling something as a “CSS4” feature provides a hint as far as when that feature was born. A feature that reaches “baseline” meanwhile merely indicates the status of that feature’s browser implementation, which is a separate concern.
Identifying features by era and implementation status are both indicators and provide meta information about a CSS feature but with different purposes.
Why Not Work With An Annual Snapshot Instead Of A Numbered Era?
It’s fair to wonder if a potential solution is to take a “snapshot” of the CSS feature set each year and use that as a mile marker for CSS feature releases. However, an annual picture of the language is less effective than defining a particular era in which specific features are introduced.
There were a handful of years when CSS was relatively quiet compared to the mad dash of the last few years. Imagine a year in which nothing, or maybe very few, CSS features are shipped, and the snapshot for that year is nearly identical to the previous year’s snapshot. Now imagine CSS explodes the following year with a deluge of new features that result in a massive delta between snapshots. It takes mental agility to compare complete snapshots of the entire language and find what’s new.
Goals And Non-Goals
I think I’ve effectively established that the term “CSS3” alone isn’t clear or helpful enough to illustrate the evolution of CSS, just as calling a certain feature “modern” degrades over time.
Grouping features in levels that represent different eras of releases — even from a marketing standpoint — offers a good deal of meaning and has a track record of success, as we’ve seen with CSS3.
All of this comes back to a set of goals that the CSS-Next group is rallying around:
- Help developers learn CSS.
- Help educators teach CSS.
- Help employers define modern web skills.
- Help the community understand the progression of CSS capabilities over time.
- Create a shared vernacular for describing how CSS evolves.
What we do not want is to:
- Affect spec definitions. CSS-Next is not a group that would define the working process of, or influence, working groups such as the CSSWG.
- Create official developer documentation. Making something like a new version of MDN doesn’t get us closer to a better understanding of how the language changes between eras.
- Define browser specification work. This should be conducted in relevant standardization or pre-standardization forums (such as the CSSWG or OpenUI).
- Educate developers on CSS best practices. That has much more to do with feature implementations than the features themselves.
- Manage browser compatibility data. Baseline is already doing that, and besides, we’ve already established that feature specifications and implementations are separate concerns.
This doesn’t mean that everything in the last list is null and void. We could, for example, have CSS eras that list all the features specced in that period. And inside that list, there could be a baseline reference for the implementations of those features, making it easier to bring forward some ideas for the next Interop, which informs Baseline.
This leaves the CSS-Next group with a super-clear focus to:
- Research the community’s understanding of modern CSS,
- Build a shared understanding of CSS feature evolution since CSS3,
- Group those features into easily digestible levels (i.e., CSS4, CSS5, and so on), and
- Educate the community about modern CSS features.
We’d Likely Start With The “CSS5” Era
A lot of thought and work has gone into the way CSS is described in eras. The initial idea was to pick up where CSS3 left off and jump straight into CSS4. But the number of features released between the two eras would be massive, even if we narrowed it down to just the features released since 2020, never mind 2009.
It makes sense, instead, to split the difference and call CSS4 a done deal as of, say, 2018 and a fundamental part of CSS in its current state as we begin with the next logical period: CSS5.
Here’s how the definitions are currently defined:
CSS3 (~2009-2012):
Level 3 CSS specs as defined by the CSSWG. (immutable)
CSS4 (~2013-2018):
Essential features that were not part of CSS3 but are already a fundamental part of CSS.
CSS5 (~2019-2024):
Newer features whose adoption is steadily growing.
CSS6 (~2025+):
Early-stage features that are planned for future CSS.
We released a request for comments last May for community input from developers like you. We’ve received a few comments that have been taken into account, but we need much more feedback to help inform our approach.
We want a big, representative response from the community! But that takes awareness, and we need you to make that happen. Anything you can do to let your teams and colleagues know that the CSS-Next group is a thing and that we’re trying to solve the way we talk about CSS features is greatly appreciated. We want to know what you and others think about the things we’re wrestling with, like whether or not the way we’re grouping eras above is a sound approach, where you think those lines should be drawn, and if you agree that we’re aiming for the right goals.
We also want you to participate. Anyone is welcome to join the CSS-Next group and we could certainly use help brainstorming ideas. There’s even an incubation group that conducts a biweekly hour-long session that takes place on Mondays at 8:00 a.m. Pacific Time (2:00 p.m. GMT).
On a completely personal note, I’d like to add that I joined the CSS-Next group purely out of interest but became much more actively involved once the mission became very clear to me. As a developer working in an agency, I see how fast CSS changes and have struggled, like many of you, to keep up.
A seasoned colleague of mine commented the other day that they wouldn’t even know how to approach vanilla CSS on a fresh website project. There is no shame in that! I know many of us feel the same way. So, why not bring it to marketing terms and figure out a better way to frame discussions about CSS features based on eras? You can help get us there!
And if you think I’m blameless when it comes to talking about CSS in generic “modern” terms, all it takes is a quick look at the headline of another Smashing article I authored… this year!
Let’s get going with CSS5 and spread the word! Let me hear your thoughts.
Extract Schema.org Data Script (Python)
Maybe this is helpful for somebody...
This script extracts Schema.org data from a given URL and saves it to a file.
- Run the Script: Execute the script in a Python environment.
- Input URL: Enter the URL of the webpage (without 'https://') when prompted.
- Output: The extracted data is saved in schema_data.txt.
- Extracts JSON-LD data from webpages.
- Identifies and counts schema types and fields.
- Saves formatted data along with metadata to a file.
- Requirements: the Python libraries requests and beautifulsoup4.
# extract_schema_data.py
# Author: Christopher Hneke
# Date: 07.07.2024
# Description: This script extracts Schema.org data from a given URL and saves it to a file.
import requests
from bs4 import BeautifulSoup
import json
from collections import defaultdict

# Function to extract Schema.org data from a given URL
def extract_schema_data(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    schema_data = []
    schema_types = set()
    field_count = defaultdict(int)

    # Recursive helper function to extract types and field frequencies from JSON data
    def extract_types_and_fields(data):
        if isinstance(data, dict):
            if '@type' in data:
                if isinstance(data['@type'], list):
                    schema_types.update(data['@type'])
                else:
                    schema_types.add(data['@type'])
            for key, value in data.items():
                field_count[key] += 1
                extract_types_and_fields(value)
        elif isinstance(data, list):
            for item in data:
                extract_types_and_fields(item)

    # Look for all <script> tags with type="application/ld+json"
    for script in soup.find_all('script', type='application/ld+json'):
        try:
            json_data = json.loads(script.string)
            schema_data.append(json_data)
            extract_types_and_fields(json_data)
        except json.JSONDecodeError as e:
            print(f"Error decoding JSON: {e}")

    return schema_data, schema_types, field_count

# Function to format Schema.org data for readable output
def format_schema_data(schema_data):
    formatted_data = ""
    for data in schema_data:
        formatted_data += json.dumps(data, indent=4) + "\n\n"
    return formatted_data

# Function to get the meta title of the page
def get_meta_title(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    title_tag = soup.find('title')
    return title_tag.string if title_tag else 'No title found'

# Function to save extracted data to a file
def save_to_file(url, title, schema_types, formatted_data, field_count, filename='schema_data.txt'):
    try:
        with open(filename, 'w', encoding='utf-8') as file:
            file.write(f"URL: {url}\n")
            file.write(f"TITLE: {title}\n")
            file.write(f"SCHEMA TYPES: {', '.join(schema_types)}\n\n")
            file.write("Field Frequencies:\n")
            for field, count in field_count.items():
                file.write(f"{field}: {count}\n")
            file.write("\nSchema Data:\n")
            file.write(formatted_data)
        print(f"Schema.org data successfully saved to {filename}")
    except Exception as e:
        print(f"Error saving to file: {e}")

# Main function to orchestrate the extraction and saving process
def main():
    url_input = input("Please enter the URL without 'https://': ")
    url = f"https://{url_input}"
    schema_data, schema_types, field_count = extract_schema_data(url)
    if not schema_data:
        print("No Schema.org data found.")
        return
    meta_title = get_meta_title(url)
    formatted_data = format_schema_data(schema_data)
    save_to_file(url, meta_title, schema_types, formatted_data, field_count)

if __name__ == "__main__":
    main()
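As a quick sanity check, here is what the recursive type/field walker from the script does on a tiny hand-made JSON-LD object (the payload is made up purely for illustration):

```python
import json
from collections import defaultdict

schema_types = set()
field_count = defaultdict(int)

def extract_types_and_fields(data):
    # Same logic as in the script above: collect @type values and count field names
    if isinstance(data, dict):
        if '@type' in data:
            if isinstance(data['@type'], list):
                schema_types.update(data['@type'])
            else:
                schema_types.add(data['@type'])
        for key, value in data.items():
            field_count[key] += 1
            extract_types_and_fields(value)
    elif isinstance(data, list):
        for item in data:
            extract_types_and_fields(item)

# Minimal, invented JSON-LD payload
json_data = json.loads('''{
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Pancakes",
    "author": {"@type": "Person", "name": "Jane Doe"}
}''')
extract_types_and_fields(json_data)

print(sorted(schema_types))   # ['Person', 'Recipe']
print(field_count['name'])    # 2 (once at top level, once inside "author")
```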
Extract and Count Reviews/AggregateRating Script (Python)
This script was basically the concept for a similar WP plugin, which automatically counts all single product ratings in each category and writes the correct total number of reviews into the "aggregateRating" Schema.org markup on the category pages.
We had a case where this was the optimal solution to display the correct "aggregateRating" count in "Recipe Rich Results" for a food blog/recipe website.
As of today, Google does not seem to pay much attention to this, but there are indicators that the math is becoming more important.
This script extracts the total number of reviews from all categories listed in a sitemap and saves the results to a file. It is specifically designed to work with webpages where review counts are displayed in a specific format (e.g., "(123)").
Run the Script: Replace placeholders (https://example.com/category-sitemap.xml) with the actual URL of the sitemap. Execute the script in a Python environment.
Output: The total reviews per category are saved in result.txt.
Python libraries: requests and beautifulsoup4 (re is part of the standard library).
Review Format: This script is suitable for webpages where the number of reviews is enclosed in parentheses, such as "(123)". It uses a regular expression to identify and extract these numbers.
# scrape_review_count.py
# Author: Christopher Hneke
# Date: 04.08.2024
# Description: This script extracts the total number of reviews from all categories listed in a sitemap and saves the results to a file.
# Description: It is specifically designed to work with webpages where review counts are displayed in a specific format (e.g., "(123)").
import requests
from bs4 import BeautifulSoup
import re
# Function to get the total number of reviews from a category URL
def get_total_reviews(url):
    total_reviews = 0
    page_number = 1
    review_pattern = re.compile(r'\((\d+)\)')
    while True:
        page_url = f"{url}/page/{page_number}/" if page_number > 1 else url
        response = requests.get(page_url)
        if response.status_code == 404:
            break
        soup = BeautifulSoup(response.content, 'html.parser')
        page_reviews = soup.find_all(string=review_pattern)
        if not page_reviews:
            break
        for review_text in page_reviews:
            match = review_pattern.search(review_text)
            if match:
                total_reviews += int(match.group(1))
        page_number += 1
    return total_reviews

# Main function to process the sitemap and extract reviews for each category
def main():
    sitemap_url = 'https://example.com/category-sitemap.xml'  # Replace with the actual sitemap URL
    response = requests.get(sitemap_url)
    soup = BeautifulSoup(response.content, 'xml')
    categories = soup.find_all('loc')
    results = []
    for category in categories:
        category_url = category.text
        category_name = category_url.split('/')[-2]
        print(f"Processing category: {category_name}")
        total_reviews = get_total_reviews(category_url)
        results.append(f"{category_name}: {total_reviews} reviews\n")
    with open('result.txt', 'w', encoding='utf-8') as file:
        file.writelines(results)
    print("Results saved to result.txt")

if __name__ == '__main__':
    main()
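The heart of get_total_reviews() is the regular expression \((\d+)\). A minimal, self-contained demonstration of how it sums review counts from page text (the sample strings are invented for illustration):

```python
import re

# Same pattern as in the script: a review count in parentheses, e.g. "(123)"
review_pattern = re.compile(r'\((\d+)\)')

# Made-up snippets of category-page text
sample_texts = ["Chocolate Cake (214)", "Apple Pie (58)", "No ratings yet"]

total_reviews = 0
for text in sample_texts:
    match = review_pattern.search(text)
    if match:
        total_reviews += int(match.group(1))

print(total_reviews)  # 272
```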
How to Add a Header to a curl Request
curl is one of those great utilities that’s been around seemingly forever and has endless use cases. These days I find myself using curl to batch download files and test APIs. Sometimes my testing leads me to using different HTTP headers in my requests.
To add a header to a curl request, use the -H flag:
curl -X 'GET' \
  'https://nft.api.cx.metamask.io/collections?chainId=1' \
  -H 'accept: application/json' \
  -H 'Version: 1'
You can add multiple headers with multiple -H uses. Header format is usually [key]: [value].
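If you want to double-check what a header set like this looks like from code, here is a rough Python equivalent built with the standard library's urllib. No request is actually sent; we only construct the request object and inspect its headers:

```python
from urllib.request import Request

# Build the same GET request as the curl example above, without sending it
req = Request(
    'https://nft.api.cx.metamask.io/collections?chainId=1',
    headers={'accept': 'application/json', 'Version': '1'},
)

# urllib normalizes header names via str.capitalize()
print(req.get_header('Accept'))   # application/json
print(req.get_header('Version'))  # 1
```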
The post How to Add a Header to a curl Request appeared first on David Walsh Blog.
Flipper Zero Review: A Geeky Multi-Tool for Penetration Testing
A geeky multi-tool capable of hacking into Wi-Fi networks and opening Tesla car charging ports has been making headlines recently. I've familiarized myself with Flipper Zero and performed basic penetration testing on my own network and system. In this post, I share the results.
According to its makers, Flipper Zero is "a portable multi-tool for pentesters and geeks". It can capture infrared signals, emulate NFC chips, read RFID tags, execute scripts via BadUSB, and much more. Almost four years after its release, parts of the community are still uncertain whether Flipper is just a glorified universal remote control, a dangerous hacking tool that governments should seek to ban, or simply the Leatherman of penetration testing.
I wanted to find out for myself and bought a Flipper a few weeks ago. Now it's time to share my first experiences. This article seeks to clarify the capabilities and limitations of Flipper Zero, so that you can evaluate whether it's worth the couple of hundred bucks in your individual case. Additionally, I'll introduce you to basic penetration testing with the WiFi Devboard and Marauder firmware.
One important note: How much you can really do with Flipper Zero depends entirely on your skills. It's certainly a good companion for deepening your understanding of the electromagnetic spectrum and computer networking basics. Anything that could be described as "serious hacking purposes" will require a specific skillset, additional software and, depending on what exactly you're trying to achieve, other equipment.
The official website provides comprehensive documentation on how to get started with your Flipper Zero. Hence, I'll focus on things that you can try out right away once you've inserted the Micro SD card, updated the firmware, and installed the qFlipper app on your desktop or mobile device.
Things to do with your Flipper Zero:
- Read and replicate the signals of all your remote controls
- Try to replicate your electronic car keys and replace them if it works (i.e., they're not protected)
- Check the RFID chips of your pets
- Backup your NFC tags (e.g., phones, cards, keycards)
- Use the universal remote on your devices
- Generate U2F tokens to manage secure access to your accounts
- Use the built-in GPIO pins for a multitude of hardware-related tasks and experiments
- Run a BadUSB demo on your PC or Mac and write your own scripts
Flipper Zero's interface is reminiscent of an old Nokia phone
In terms of handling, the 10x4 cm (4x1.6 in) device is controlled by a simple, old-fashioned interface and an intuitive menu that will resonate with anyone who was around during the Nokia era. However, if you don't like pressing real buttons, you can navigate the menu and control your Flipper with the app (requires Bluetooth).
While you're not using your Flipper, the device will display scenes from the life of a pixel-style dolphin, which you can level up by reading and emulating signals (does not impact functionality). This slightly tacky feature also turns the multi-tool into a Tamagotchi for geeks.
To interact with Wi-Fi networks, you'll need a devboard that can be connected via the GPIO pins. The next section of the article takes a closer look at how to use the Wi-Fi devboard with Flipper Zero.
With the Wi-Fi devboard and Marauder firmware, Flipper can sniff on networks and launch different attacks
To use the Wi-Fi module as described below, you'll first need to perform a firmware update and then flash the devboard with the Marauder firmware. Once you've installed the companion app on your Flipper, you're good to go.
You can access the controls in the Apps folder under "GPIO". Once there, you should first scan for Wi-Fi access points near you. This will provide you with a list of all networks around, including their names and corresponding MAC addresses.
NOTE: Only perform the following steps on your own networks for the purpose of penetration testing! Never attack networks that are not your own, as this would be illegal.
Once you have the list of Wi-Fi networks, you can select the network that you want to "attack". Marauder offers different attack modes. The simplest one is to deauthenticate all devices connected to the Wi-Fi network. If you execute this attack, you'll notice that all devices connected to your Wi-Fi network are automatically disconnected for a moment and have to reconnect.
Another attack mode is called "rickroll". If you execute it, a long list of fake access points is created displaying Rick Astley's song Never Gonna Give You Up line-by-line.
A rather harmless example of what you can do with the Marauder: Rickrolling networks with fake Wi-Fi access points
However, the Marauder firmware also enables more serious attacks that are great for penetration testing. The most basic method is sniffing authentication data. As explained in more detail in this video, you can sniff a network while a device reconnects after being deauthenticated, and then use simple freeware and a password list to crack the network credentials (i.e., the password). Of course, this method only works on weak passwords, and a simple way to protect yourself is to choose a secure Wi-Fi password (at least 12 characters with a combination of uppercase, lowercase, numbers, and symbols).
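That password guideline can be expressed as a tiny check. This is only an illustrative sketch of the rule of thumb above (length plus character variety), not a real strength estimator, which would also need to account for dictionary words and entropy:

```python
import re

# Guideline from the text: at least 12 characters, mixing uppercase,
# lowercase, digits, and symbols
def meets_guideline(pw: str) -> bool:
    return (
        len(pw) >= 12
        and re.search(r'[A-Z]', pw) is not None
        and re.search(r'[a-z]', pw) is not None
        and re.search(r'\d', pw) is not None
        and re.search(r'[^A-Za-z0-9]', pw) is not None
    )

print(meets_guideline('correct-Horse7battery'))  # True
print(meets_guideline('password123'))            # False (no uppercase, no symbol)
```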
Combined, the Wi-Fi board and Marauder app can be used for various other purposes, e.g., launching an "evil portal" that phishes login credentials, setting up a mobile wardriving rig, or reading GPS data. Would you like to hear more about any of those features? Let me know in the comments!
While a Flipper Zero certainly won't give you magical hacking powers, it is a great (learning) tool for all those interested in secure communication and networking. It actually seems fair to think of it as the "Leatherman of pentesting". A Leatherman clearly isn't the best knife, the best screwdriver, or the best saw. But it includes the basic functionality of all those tools in a practical form. Similarly, Flipper Zero is a versatile multi-tool that allows you some serious MacGyvering if you possess the necessary skills.
One last thing I want to point out is the surprisingly strong battery life. After dozens of hours of tinkering and many more in standby (with Bluetooth on), my Flipper's battery is still at 98% on the first charge. That said, the battery also seems to be an Achilles heel, as some users report issues with swollen power cells.
In this article, I've only scratched the surface of the many functionalities Flipper Zero offers. There's an ever-growing list of apps and add-ons, alongside an active community of people discovering new ways of using Flipper on a daily basis. For electronics geeks, the GPIO pins allow them to develop their own modules. Antennas can be used to greatly amplify the strength of infrared signals and the Wi-Fi board. There's much more to discover and I'm looking forward to the next experiment.