Serial Key Sensor to Midihub Interface

Developing an 88-key acoustic piano sensor array using on/off switches.

The objective is to connect it to Midihub.

  • Eight single-octave sensor strips connected to a breadboard, then send
    that data to Midihub for passthrough to Chordie.

Is there any potential to map these sensor strips in the software for easy configuration?

Are there any other issues with implementing this solution, i.e. obstacles or components that would have to be developed?

You’ll have to use some sort of controller, like an Arduino, with enough GPIOs, running software to read out the switch values and encode the performance events into a MIDI serial stream to send to Midihub’s DIN-5 inputs.
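
To make that concrete, here is a minimal sketch of the idea, assuming one octave of on/off key switches wired between digital pins 2–13 and ground, the Arduino MIDI Library on the board’s default UART, and the usual MIDI DIN-5 output circuit toward Midihub. Pin numbers, note range, velocity, and debounce time are placeholders, not the thread’s actual design:

```cpp
#include <MIDI.h>

MIDI_CREATE_DEFAULT_INSTANCE();           // serial MIDI on the default UART at 31250 baud

const uint8_t NUM_KEYS   = 12;            // one octave of switches (assumption)
const uint8_t FIRST_PIN  = 2;             // keys on pins 2..13 (assumption)
const uint8_t FIRST_NOTE = 60;            // middle C for the lowest key (assumption)
const uint8_t CHANNEL    = 1;
const unsigned long DEBOUNCE_MS = 5;

bool keyDown[NUM_KEYS];
unsigned long lastChange[NUM_KEYS];

void setup() {
  for (uint8_t i = 0; i < NUM_KEYS; i++) {
    pinMode(FIRST_PIN + i, INPUT_PULLUP); // closed switch reads LOW
    keyDown[i] = false;
    lastChange[i] = 0;
  }
  MIDI.begin(MIDI_CHANNEL_OMNI);          // we only transmit; the listen channel is irrelevant
}

void loop() {
  unsigned long now = millis();
  for (uint8_t i = 0; i < NUM_KEYS; i++) {
    bool pressed = (digitalRead(FIRST_PIN + i) == LOW);
    if (pressed != keyDown[i] && (now - lastChange[i]) > DEBOUNCE_MS) {
      keyDown[i] = pressed;
      lastChange[i] = now;
      if (pressed) {
        MIDI.sendNoteOn(FIRST_NOTE + i, 100, CHANNEL);  // fixed velocity, since the switches are on/off
      } else {
        MIDI.sendNoteOff(FIRST_NOTE + i, 0, CHANNEL);
      }
    }
  }
}
```

Scaling this to 88 keys is mostly a matter of more inputs (or multiplexing) and a longer note table; the MIDI encoding stays the same.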

That is my plan: 88 sensors, with 3-4 Arduinos outputting in parallel.

The sensors will be the more challenging part: getting the customization, speed, and reliability right for installation.

Programming with the Midihub Editor: do you see any roadblocks or challenges in setting up the stream output using the software?

Do you have an example of how this would be set up in the Blokas Editor?
Sequence:

  1. Sensor
  2. Physical layer: thin tape to sensor input board
  3. Blokas software
  4. Conversion to MIDI signal
  5. Output through USB or Ethernet to Windows software

Rgds,

A

The conversion to serial MIDI would happen on Arduino’s software side, Midihub would receive the MIDI stream, and then you can use Midihub for further processing using the pipes available, and/or forward it to the computer as a USB MIDI stream.
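
If it helps demystify what that Arduino-side conversion actually is: serial MIDI is just bytes on a UART at 31250 baud, where a Note On is three bytes (status 0x90 + channel, note number, velocity) and a Note Off uses status 0x80. A bare-bones illustration, assuming a board with a spare hardware UART (Serial1) wired to Midihub’s DIN input; the note, velocity, and timing values are placeholders:

```cpp
void setup() {
  Serial1.begin(31250);                    // standard MIDI baud rate toward Midihub's DIN-5 input
}

void sendNoteOn(uint8_t note, uint8_t velocity, uint8_t channel) {
  Serial1.write(0x90 | (channel & 0x0F));  // Note On status byte, channel 0-15 on the wire
  Serial1.write(note & 0x7F);              // data bytes are 7-bit
  Serial1.write(velocity & 0x7F);
}

void sendNoteOff(uint8_t note, uint8_t channel) {
  Serial1.write(0x80 | (channel & 0x0F));  // Note Off status byte
  Serial1.write(note & 0x7F);
  Serial1.write((uint8_t)0);               // release velocity 0
}

void loop() {
  sendNoteOn(60, 100, 0);                  // middle C on MIDI channel 1 (0 on the wire)
  delay(500);
  sendNoteOff(60, 0);
  delay(500);
}
```

Midihub treats that stream like any other MIDI input, so all of its pipes work on it unchanged before forwarding to the computer over USB.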

The sensor design is at the final rough-draft stage; I should have a software designer sourced very soon.

Much appreciated!! I get the logic now


As much as I love Midihub, can I ask why you would go to a Midihub from the microcontrollers instead of just programming the microcontrollers and using your own MIDI/USB hardware? And why 3-4 Arduinos? One Teensy 4.1 with some muxes should be ample to cover your sensor needs. Even using the Midihub, you’ll basically have to fully build a MIDI device to send MIDI to the Midihub. Depending on how complex you want your MIDI device to be, you could use AI instead of hiring a programmer; AI coding is extremely capable in the latest versions. You’ll have to understand some level of coding (which the AI can teach you as you build) to clean up errors, but it is excellent for a task like programming a MIDI device. Especially if you are looking more for the basic MIDI protocols, give it the right prompts and it will whip up code for you in an instant.
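
Purely as a sketch of that Teensy-plus-multiplexers suggestion, not the thread’s actual design: it assumes six CD74HC4067 16-channel muxes (96 inputs for 88 keys) sharing four select lines, each feeding its own signal pin, switches wired from a mux channel to ground, and the Teensy’s USB Type set to MIDI. Pin choices are placeholders and debouncing is left out for brevity:

```cpp
#include <Arduino.h>

const uint8_t SELECT_PINS[4] = {2, 3, 4, 5};          // shared S0..S3 on all muxes (assumption)
const uint8_t SIG_PINS[6]    = {6, 7, 8, 9, 10, 11};  // one signal pin per mux (assumption)
const uint8_t FIRST_NOTE     = 21;                    // A0, the lowest note on an 88-key piano
const uint8_t CHANNEL        = 1;

bool keyDown[96];

void setup() {
  for (uint8_t s = 0; s < 4; s++) pinMode(SELECT_PINS[s], OUTPUT);
  for (uint8_t m = 0; m < 6; m++) pinMode(SIG_PINS[m], INPUT_PULLUP);  // closed key pulls LOW
  for (uint8_t k = 0; k < 96; k++) keyDown[k] = false;
}

void loop() {
  for (uint8_t ch = 0; ch < 16; ch++) {
    // present channel 'ch' on every mux at once via the shared select lines
    for (uint8_t s = 0; s < 4; s++) digitalWrite(SELECT_PINS[s], (ch >> s) & 1);
    delayMicroseconds(5);                             // let the mux outputs settle
    for (uint8_t m = 0; m < 6; m++) {
      uint8_t key = m * 16 + ch;
      if (key >= 88) continue;                        // only 88 of the 96 inputs are used
      bool pressed = (digitalRead(SIG_PINS[m]) == LOW);
      if (pressed != keyDown[key]) {
        keyDown[key] = pressed;
        if (pressed) usbMIDI.sendNoteOn(FIRST_NOTE + key, 100, CHANNEL);
        else         usbMIDI.sendNoteOff(FIRST_NOTE + key, 0, CHANNEL);
      }
    }
  }
  while (usbMIDI.read()) {}                           // drain any incoming USB MIDI
}
```

Swapping usbMIDI for a serial MIDI output would let the same scanning approach feed Midihub’s DIN input instead, if you still want it in the chain.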

I started tinkering around with a Teensy to build my own custom MIDI device. When I first tried AI coding to assist, it was awful. But the advancements made in a very short time are mind-blowing. With AI help, the software for the device I’m building is crazier than I could even dream of. I think of an over-the-top idea and it gives me a path forward. It usually takes a bit of reworking to make it come to life, but I would never have been able to create such complex code without AI assistance. If you are just doing basic MIDI though, you could probably create the software for your device in a few days.


What AI platform are you using?

88 musical instrument sensors outputting to Midihub.
Midihub is a passthrough to other external hardware/software for presenting the notes/music being played on the instrument. I don’t want to build or support that functionality in house.
Midihub is great off-the-shelf HW/SW that can be packaged with the solution.

I tried a bunch. The main one I’m using is ChatGPT-4, and I supplement with Google Gemini. I find that Gemini Advanced is a much better problem solver than ChatGPT, but ChatGPT is way better at printing out proper syntax. I started off with Copilot Pro, which at first was incredible for coding, but Microsoft did something with it and it’s just not as useful anymore. One thing I’ve found is that when I run into a problem I can’t lock down, I’ll try them all, and Copilot will usually give an insight into the problem that the others didn’t even seem to identify, which is odd since Copilot also uses ChatGPT-4. ChatGPT seems to be more straightforward when you ask it to do something, and gives you a brief overview of the code or changes it printed. Gemini does a much better job of explaining the code it creates, so that you can understand it on a fundamental level. You can get the same level of explanation from ChatGPT, but you have to ask for specifics.