I recently tackled the challenge of sending images to a pair of smart glasses we're developing. The big hurdle was the lack of direct access to the firmware and the extremely limited documentation on available commands and their formats. All I had at the start was an outdated APK that no longer worked and some partial source code that only supported basic functionality. The project was entirely new territory for me: I'd never worked much with Bluetooth protocols, real-time operating systems (RTOS), or even Android development. Fortunately, the availability of large language models (LLMs) made the exploration feasible.

### Initial Attempts

I began by sending commands based on the limited documentation, hoping to trigger an image display. These attempts failed completely, producing no response or any indication from the glasses that anything had been received. It felt like shouting into a void.

### Reverse Engineering

Realizing I needed deeper insight, I decompiled the APK to investigate the original implementation. Surprisingly, the APK didn't contain a working example of sending images. Instead, I found unused code snippets suggesting the capability had existed at some point but was never fully documented. Commands for drawing primitives (lines, rectangles, text) came with clear examples, yet nothing similar existed for images. I reconstructed packet headers by referencing the working commands and tried modifying them to carry image data, but this initially yielded no results.

### Leveraging UUIDs and Subscriptions

Next, I shifted my approach toward getting feedback from the glasses. When you connect over Bluetooth Low Energy (BLE), the device exposes a set of characteristics identified by UUIDs, some of which you can subscribe to for notifications. Unfortunately, these UUIDs came with no descriptive documentation. By systematically logging every UUID interaction, I gradually identified patterns correlating specific UUIDs with particular functionality. Even though the returned data was raw bytes and hard to interpret directly, the logging let me spot responses indicating malformed requests. This marked significant progress: I finally had confirmation that the glasses were at least receiving and processing my messages.
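To give a flavor of that logging setup, here is a minimal sketch using the stock Android GATT API. The structure and names are mine, not the original app's; it simply subscribes to every characteristic that supports notifications and dumps each payload as hex next to its UUID.

```kotlin
import android.bluetooth.BluetoothGatt
import android.bluetooth.BluetoothGattCallback
import android.bluetooth.BluetoothGattCharacteristic
import android.bluetooth.BluetoothGattDescriptor
import android.util.Log
import java.util.UUID

// Standard Client Characteristic Configuration descriptor, used to enable notifications.
private val CCCD_UUID = UUID.fromString("00002902-0000-1000-8000-00805f9b34fb")

class LoggingGattCallback : BluetoothGattCallback() {

    override fun onServicesDiscovered(gatt: BluetoothGatt, status: Int) {
        if (status != BluetoothGatt.GATT_SUCCESS) return
        // Subscribe to everything that can notify. Real code has to
        // serialize descriptor writes (one outstanding write at a time);
        // this loop is compressed for readability.
        gatt.services.flatMap { it.characteristics }
            .filter { (it.properties and BluetoothGattCharacteristic.PROPERTY_NOTIFY) != 0 }
            .forEach { characteristic ->
                gatt.setCharacteristicNotification(characteristic, true)
                characteristic.getDescriptor(CCCD_UUID)?.let { cccd ->
                    cccd.value = BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE
                    gatt.writeDescriptor(cccd)
                }
            }
    }

    // The pre-API-33 callback; still the one most devices invoke.
    override fun onCharacteristicChanged(
        gatt: BluetoothGatt,
        characteristic: BluetoothGattCharacteristic
    ) {
        // Log every UUID next to its raw payload so patterns can be
        // correlated with whatever command was just sent.
        val data = characteristic.value ?: return
        val hex = data.joinToString(" ") { "%02X".format(it) }
        Log.d("GlassesBLE", "notify ${characteristic.uuid}: $hex")
    }
}
```

Pairing each command I sent with the notifications that followed it is what eventually surfaced the "malformed request" responses.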
### Image Format and Compression Issues

At first, I assumed the remaining problem was image formatting. Since the APK's image-drawing command was unused, I had no guidance on the expected format. On top of that, images were compressed by a native binary library written in C, completely opaque to my investigation. Unable to identify the compression algorithm through online searches, I extracted the native binary from the APK and incorporated it directly into my workflow, hoping to mimic the required compression step exactly (a sketch of that wrapper appears at the end of this post). Despite these efforts, the repeated "malformed request" errors continued.

### Breakthrough with Header Adjustment

I revisited the header configuration and realized I had mistakenly been using the "draw canvas" command instead of the correct "raw image" command. Correcting this mistake got the glasses to return a success message, my first tangible sign that the protocol was at least partially right. Yet despite the success confirmations, no image appeared. Suspecting a color issue, I switched the bitmap color from black to white, and a visible line immediately appeared on the glasses' screen (a sketch of that call also appears at the end of this post).

### Current Status and Next Steps

This result was extremely encouraging: it proved communication was working, even if only partially. Currently, the images sent are displayed incorrectly, likely due to unresolved formatting or compression issues. With a functional baseline established, debugging and iterative experimentation should now be significantly easier. A side issue, where rendering an image interferes with subsequent navigation, likely points to internal firmware constraints beyond my current control. Overall, I'm optimistic about further progress now that there's visible confirmation of successful communication with the glasses.
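For the curious, here are two rough sketches of what this experimentation looks like in practice. The first is the wrapper around the compression library extracted from the APK. The library name and the function signature are hypothetical stand-ins; the real ones came out of the decompiled code and an opaque C binary.

```kotlin
object GlassesCodec {
    init {
        // The .so extracted from the APK's lib/ directory and bundled
        // into this project's jniLibs. "imgcodec" is a placeholder name.
        System.loadLibrary("imgcodec")
    }

    // Declared to mirror the decompiled Java wrapper. The actual name,
    // parameters, and return type are assumptions for illustration.
    external fun compress(raw: ByteArray, width: Int, height: Int): ByteArray
}
```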
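The second sketch shows roughly the shape of the call that finally drew a visible line: a "raw image" header followed by 1-bit pixel data, where 1 means a lit (white) pixel. Every concrete value here, including the opcode, the characteristic UUID, and the header layout, is a placeholder rather than the device's actual protocol.

```kotlin
import android.bluetooth.BluetoothGatt
import java.util.UUID

// Placeholder values; the real ones came out of the decompiled APK.
private val WRITE_CHAR_UUID = UUID.fromString("0000ffe1-0000-1000-8000-00805f9b34fb")
private const val CMD_RAW_IMAGE: Byte = 0x25 // the earlier bug was sending the "draw canvas" opcode here

/** One all-white row at 1 bit per pixel: 0xFF lights eight pixels. */
fun whiteLine(widthPixels: Int): ByteArray =
    ByteArray(widthPixels / 8) { 0xFF.toByte() }

/** Frame the pixel data with the (assumed) header and write it to the glasses. */
fun sendRawImage(gatt: BluetoothGatt, x: Int, y: Int, pixels: ByteArray) {
    val header = byteArrayOf(
        CMD_RAW_IMAGE,
        (x and 0xFF).toByte(), ((x shr 8) and 0xFF).toByte(), // little-endian x origin (assumed)
        (y and 0xFF).toByte(), ((y shr 8) and 0xFF).toByte()  // little-endian y origin (assumed)
    )
    val characteristic = gatt.services
        .flatMap { it.characteristics }
        .first { it.uuid == WRITE_CHAR_UUID }
    // Payloads larger than the negotiated MTU would need chunking;
    // a single thin line fits in one write.
    characteristic.value = header + pixels
    gatt.writeCharacteristic(characteristic)
}
```

Filling the buffer with zeros instead would reproduce the original symptom: a success response with nothing visible on screen.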