My inspiration: the wall-mount and antenna adapter from FTS Hennig.
Unfortunately, the mount is rather expensive, with a price tag of around 50 EUR. So I decided to use our new lab 3D printer and try to design it myself using Autodesk's Fusion 360 software. The result is released here under a Creative Commons license:
The mount contains three mounting holes which can be used for screwing it against a wall, as well as some cutouts at the bottom to keep the TS9 antenna, USB-C and Ethernet ports accessible.
My model rendered by Autodesk Fusion 360.
For the TS9 antenna ports, I am using the following TS9-to-SMA adapters, which can be screwed into the respective holes of the mount. This allows a permanent installation of an external 5G/LTE antenna, while the router can still be easily removed because the adapters align exactly with the connectors of the router.
This blog post covers the required steps to gain root access via Telnet on Netgear Nighthawk mobile 5G/LTE routers. It is the first post in a small series covering my experiences playing around with this device.
Last month I obtained one of Netgear's latest mobile 5G routers, the Netgear Nighthawk M5 (model MR5200-100EUS). Being one of the most expensive consumer 5G routers, I was lucky to get a fairly good second-hand deal on eBay.
Gaining root access to the device is actually fairly simple in comparison to rooting modern Android-based devices. The router exposes an open TCP port providing an AT command interface. However, this port is only accessible via a tethered USB connection, not via Wi-Fi.
Using this AT command interface, we can interact with the modem and unlock an extended command set which allows us to enable a Telnet daemon.
1. Install the sierrakeygen tool on your machine. (More detailed installation instructions are covered in the README file of the repo.)
2. Connect your machine via USB-C to the Netgear router.
3. Make sure to disconnect from the Netgear Wi-Fi.
4. Open a terminal and connect to the AT command interface via netcat (nc). (Make sure not to miss the -c option, as it enables nc to use the proper CRLF line endings which are required by the AT interface.)
nc -c 192.168.1.1 5510
5. Once connected to the AT command interface, you need to request an unlock challenge code by sending:
AT!OPENLOCK?
The previous command will return a challenge code which we use to generate a corresponding response code via the previously installed sierrakeygen.py tool:
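The exact invocation depends on how you installed sierrakeygen and on your modem's device generation, so treat the following lines only as a sketch with placeholder values:

# Generate the unlock response from the challenge (placeholder challenge code;
# check the sierrakeygen README for the device-generation flag matching your modem)
python3 sierrakeygen.py -l <challenge code> -d SDX55

# sierrakeygen prints an unlock command which you send back in the AT session, e.g.:
AT!OPENLOCK="<response code>"

# Enable the Telnet daemon. This is reportedly the command used on other
# Sierra-based Netgear routers, so I assume the same applies here.
AT!TELEN=1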
You can now close the AT command session by pressing Ctrl+C.
6. Power-cycle the Netgear Router to start the Telnet daemon.
Voilà, you can now telnet into the device via both the tethered USB-C connection and Wi-Fi.
nc -c 172.23.156.129 23
mdm 1623 sdxprairie
/ # uname -a
uname -a
Linux sdxprairie 4.14.117 #1 PREEMPT Thu Aug 19 23:42:26 UTC 2021 armv7l GNU/Linux
Disclaimer: Please be aware that the device's security is now compromised, as any device connected via Wi-Fi or USB can gain root access: the root Telnet login requires no password.
Next steps
Before proceeding, we should make sure that we can bring the device back to a secure state by replacing the Telnet daemon with a Secure Shell (SSH) daemon. In one of the next posts of this series, I will be building a statically linked version of the Dropbear SSH server to replace Telnet.
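As a rough preview, a statically linked Dropbear for the router's armv7l platform could be cross-compiled along the following lines; the toolchain prefix is only an example and the details will be covered in the follow-up post:

# Cross-compile a statically linked Dropbear (example toolchain prefix)
./configure --host=arm-linux-gnueabi CC=arm-linux-gnueabi-gcc --disable-zlib
make PROGRAMS="dropbear dropbearkey" STATIC=1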
Before continuing my reverse engineering efforts on the device, I would like to ensure that I will not brick the router while doing so. To that end, I will dump the firmware and extract all the details from it, which will hopefully allow us to restore the device by flashing the original firmware. Maybe we will even be able to run OpenWRT on it.
In September of last year, the Open Data Lab was founded in Aachen with a virtual kick-off event.
At the Open Data Lab, we want to advance volunteer projects around open data in Aachen. We are looking for people who are generally interested in this, whether developers, designers or data journalists from public administration, politics and civil society.
We want to bring data and ideas together and turn them into projects.
What is Open Data?
Open Data refers to data that may be used, redistributed and reused by anyone for any purpose. Restrictions on its use are only permitted to preserve the provenance and openness of the knowledge, for example by requiring attribution of the author. Excluded are personal data as well as data that is otherwise worthy of protection.
Who can participate?
In principle, everyone is welcome, regardless of whether you already have prior knowledge or not 🙂
Citizens
(Local) politicians
Public administration employees
Scientists
Entrepreneurs
Open Knowledge Labs & digital volunteers
Data journalists
University students, school students and teachers
Current projects
The following list gives an overview of current projects being developed with open data in Aachen in the context of the Open Data Lab:
GoSƐ is a modern file uploader focusing on scalability and simplicity. It is a little hobby project I've been working on over the last few weekends.
The only requirement for GoSƐ is an S3 storage backend, which allows it to scale horizontally without the need for additional databases or caches. Uploaded files are divided into equally sized chunks which are hashed with an MD5 digest in the browser before upload. This allows GoSƐ to skip chunks which already exist, resulting in seamless resumption of interrupted uploads and storage savings.
Both uploads and downloads are directed straight at the S3 server, so GoSƐ itself only sees a few small HTTP requests instead of the bulk of the data. Behind the scenes, GoSƐ uses more advanced S3 features like multi-part uploads and pre-signed requests to make this happen.
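To illustrate the pre-signed request idea, here is a minimal sketch of how a client can upload a chunk directly to S3 once the backend has handed out a pre-signed URL; the URL and file name below are placeholders, not GoSƐ's actual API:

# Hypothetical pre-signed URL issued by the backend (placeholder values)
PRESIGNED_URL='https://s3.example.com/bucket/upload-id/part-1?X-Amz-Signature=...'

# The client PUTs the chunk straight to S3, so the backend never sees the payload
curl --request PUT --upload-file chunk-1.bin "$PRESIGNED_URL"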
Users can choose between multiple pre-configured S3 buckets/servers and enable browser & mail notifications about completed uploads. A customisable retention/expiration time for each upload can also be selected by the user and is implemented via S3 life-cycle policies. Optionally, users can opt in to using an external service to shorten the URL of the uploaded file.
Currently, a single concurrent upload of a single file is supported. Users can observe the progress via a table of detailed statistics, a progress bar and a chart showing the current transfer speed.
GoSƐ aims to keep its deployment simple by bundling both front- and backend components into a single binary or Docker image. GoSƐ has been tested with AWS S3, Ceph's RadosGW and MinIO. Pre-built binaries and Docker images of GoSƐ are available for all major operating systems and architectures on the release page.
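As a rough sketch of a container-based deployment (the image name, port and S3 credential environment variables are assumptions on my side, so check the repository for the authoritative instructions):

# Hypothetical container deployment of GoSƐ; image name, port and
# environment variables are assumptions, not the documented configuration
docker run --rm --publish 8080:8080 \
  --env AWS_ACCESS_KEY_ID=... \
  --env AWS_SECRET_ACCESS_KEY=... \
  ghcr.io/stv0g/gose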
GoSƐ is open-source software licensed under the Apache 2.0 license.
De-duplication of uploaded files based on their content-hash
Uploads of existing files complete in no time without re-uploading
S3 Multi-part uploads
Resumption of interrupted uploads
Drag & Drop of files
Browser notifications about failed & completed uploads
User-provided object expiration/retention time
Copy URL of uploaded file to clipboard
Detailed transfer statistics and progress-bar / chart
Installation via single binary or container
JS/HTML/CSS Frontend is bundled into binary
Scalable to multiple replicas
All state is kept in the S3 storage backend
No other database or cache is required
Direct up- & downloads to/from Amazon S3 via pre-signed URLs
The GoSƐ deployment itself does not see any significant traffic
UTF-8 filenames
Multiple user-selectable buckets / servers
Optional link shortening via an external service
Optional notification about new uploads via shoutrrr
Mail notifications to user-provided recipient
Cross-platform support:
Operating systems: Windows, macOS, Linux, BSD
Architectures: arm64, amd64, armv7, i386
Roadmap
I consider the current state of GoSƐ to be production ready. Its basic functionality is complete. However, there are still some ideas which I would like to work on in the future:
This article describes the necessary steps to run a Xilinx hw_server as a Docker container.
Xilinx’s hw_server is a command-line utility which handles JTAG communication between a Xilinx FPGA board and, usually, the Vivado IDE. It can be used to configure the FPGA bitstream, connect to embedded Integrated Logic Analyzer (ILA) cores or debug processor cores via GDB and the Xilinx System Debugger (XSDB). The hw_server is typically used when those tasks shall be performed remotely, as the connection between Vivado or XSDB and the hw_server is established via TCP, which allows us to run it on a remote system.
Running the hw_server as a Docker container has the benefit that its installation boils down to starting a single container:
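docker run --restart unless-stopped --privileged --volume /dev/bus/usb:/dev/bus/usb --publish 3121:3121 --detach ghcr.io/stv0g/hw_server:v2021.2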
It also allows us to run the hw_server on architectures which are not natively supported by Xilinx, such as the AArch64/ARM64 and ARMv7 architectures commonly found in Raspberry Pis.
This is enabled by Docker's support for running container images for non-native architectures. I am using the aptman/qus image to set up this user-mode emulation. qemu-user-static (qus) is a compilation of utilities, examples and references to build and execute OCI images (aka Docker images) for foreign architectures using QEMU's user-mode emulation.
Run the following commands to run the hw_server on an embedded device:
# Install docker
sudo apt-get update && sudo apt-get upgrade
curl -sSL https://get.docker.com | sh
# Start Docker
sudo systemctl enable --now docker
# Enable qemu-user emulation support for running amd64 Docker images
# *Note:* only required if your system arch is not amd64!
docker run --rm --privileged aptman/qus -s -- -p x86_64
# Run the hw_server
docker run --restart unless-stopped --privileged --volume /dev/bus/usb:/dev/bus/usb --publish 3121:3121 --detach ghcr.io/stv0g/hw_server:v2021.2
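Once the container is running, you can point Vivado's hardware manager at the remote hw_server from your workstation. The hostname below is just a placeholder for the machine running the container:

# Open the Vivado Tcl console on your workstation and connect to the remote hw_server
vivado -mode tcl
open_hw_manager
connect_hw_server -url raspberrypi.local:3121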
This setup has been tested with a Raspberry Pi 4 running the new 64-bit Debian Bullseye Raspberry Pi OS.