  1. #1
    Join Date
    Mar 2018
    Posts
    5

    Receiving MIDI on 2 interfaces, merging and outputting on MIDI output

    Hello! This is my first post here.

    I'm developing a Windows application that serves as a remote MIDI controller for a hardware sound synthesizer I designed and built. I am using a mix of managed and unmanaged code. The software communicates with the hardware synthesizer over a bidirectional MIDI connection: the MIDI output of the synth is connected to the MIDI in of the MIDI interface on the computer, and the MIDI input of the synth is connected to the output of the computer interface (the good old 5-pin DIN connectors). The synthesizer can also receive MIDI note data, so I usually open a music sequencer and send MIDI notes to the same MIDI interface port the synth is connected to.

    This works wonderfully with a couple of MIDI interfaces I have, which seem to have friendly drivers that allow the same MIDI port to be opened in several applications simultaneously. However, a third hardware MIDI interface I have here exhibits a different behavior: the first application that opens the port gets exclusive access to it, and no further applications can open the same port (the error says something about "insufficient memory"), so I figured this is very driver dependent.

    As a workaround, I thought that with the help of a virtual MIDI driver AND a second MIDI input implemented in my Windows application, I could set up the MIDI sequencer to output to the virtual MIDI port. My controller application would open that port and forward its messages to the MIDI output, along with the communication data the synth and my software exchange in both directions.

    Basically, the idea is a software MIDI merger.

    The way my software works now is that I have a MidiInProc callback which receives all the MIDI data on one of the ports, and a second callback for the second MIDI input port. Both callbacks SendMessage to the main WndProc, where I do things with the received MIDI data, like updating the on-screen controls. The second MIDI input, where the MIDI sequencer software is sending its data, generates a replica of the input data and sends it to the MIDI output, a sort of software MIDI thru. This works fine; however, when the received MIDI data is pretty busy, with lots of notes or MIDI controllers and the like, and I move my application's window around the screen with the mouse, the running MIDI thru data stutters and the window movement gets laggy, because the form is busy handling the messages from the callbacks.
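
    Roughly, the thru part of the second callback looks like this (a simplified sketch, not my exact code; g_hMidiOut and g_hwndMain stand in for my MIDI output handle and main window handle):

    Code:
    	#include <windows.h>
    	#include <mmsystem.h>
    	#pragma comment(lib, "winmm.lib")

    	HMIDIOUT g_hMidiOut = NULL;   // opened elsewhere with midiOutOpen()
    	HWND     g_hwndMain = NULL;   // main form's window handle

    	void CALLBACK MidiInProc2(HMIDIIN hMidiIn, UINT wMsg, DWORD_PTR dwInstance,
    	                          DWORD_PTR dwParam1, DWORD_PTR dwParam2)
    	{
    		if (wMsg == MIM_DATA)
    		{
    			// dwParam1 already holds the packed short MIDI message,
    			// so it can be forwarded to the output as-is (software thru).
    			midiOutShortMsg(g_hMidiOut, (DWORD)dwParam1);
    			// Notify the form so the on-screen controls can be updated too.
    			::SendMessage(g_hwndMain, WM_USER, 0, 0);
    		}
    	}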

    A second solution, I figured, could be a BackgroundWorker that runs the MIDI forwarding routines continuously. But the way I was thinking about it, to get near-zero latency I would have to run it as fast as possible so that no MIDI input data is lost, which would essentially put one of the processor cores under full load.

    I was wondering what a proper way of implementing this would be, without creating a while(1) loop running in a separate BackgroundWorker or thread, which would essentially load a processor core to 100%.

    Sorry for the long post, but if anyone can point me in any direction, I would greatly appreciate it!

    Best regards,
    Konstantin

  2. #2
    2kaud is offline Super Moderator Power Poster
    Join Date
    Dec 2012
    Location
    England
    Posts
    7,822

    Re: Receiving MIDI on 2 interfaces, merging and outputting on MIDI output

    You might find this to be of interest.

    http://flounder.com/workerthreads.htm

  3. #3
    Join Date
    Mar 2018
    Posts
    5

    Re: Receiving MIDI on 2 interfaces, merging and outputting on MIDI output

    Thank you very much! I'll look into it in detail!

    Regards,
    Konstantin

    Quote Originally Posted by 2kaud View Post
    You might find this to be of interest.

    http://flounder.com/workerthreads.htm

  4. #4
    Join Date
    Jun 2010
    Location
    Germany
    Posts
    2,675

    Re: Receiving MIDI on 2 interfaces, merging and outputting on MIDI output

    You are not showing any of your code, so I need to guess to some degree.

    It sounds like you are forwarding every single MIDI event in an individual Windows message. With lots of incoming events, this imposes a heavy load on your receiving message loop and the Windows message dispatching system. It may be better to buffer incoming events in your receiver callback until a certain count of events is reached or a timeout of a few milliseconds elapses, and forward them bundled, i.e. multiple events in a single Windows message. This could drastically reduce the system load caused by MIDI event handling.
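
    Just to illustrate the batching idea, here's an untested sketch (only short messages are considered, and names like WM_APP_MIDIBATCH and g_batch are made up for illustration):

    Code:
    	#include <windows.h>
    	#include <mmsystem.h>
    	#pragma comment(lib, "winmm.lib")

    	#define WM_APP_MIDIBATCH (WM_APP + 1)

    	const size_t BATCH_MAX = 64;

    	CRITICAL_SECTION g_batchLock;   // InitializeCriticalSection() once at startup
    	DWORD  g_batch[BATCH_MAX];      // packed short messages, as delivered in dwParam1
    	size_t g_batchCount = 0;
    	HWND   g_hwndTarget = NULL;     // main window handle, set once at startup

    	void CALLBACK MidiInProc(HMIDIIN hMidiIn, UINT wMsg, DWORD_PTR dwInstance,
    	                         DWORD_PTR dwParam1, DWORD_PTR dwParam2)
    	{
    		if (wMsg != MIM_DATA)
    			return;

    		bool flush = false;
    		EnterCriticalSection(&g_batchLock);
    		if (g_batchCount < BATCH_MAX)          // a full buffer simply drops events in this sketch
    			g_batch[g_batchCount++] = (DWORD)dwParam1;
    		flush = (g_batchCount >= BATCH_MAX);
    		LeaveCriticalSection(&g_batchLock);

    		// One notification per bundle instead of one per event. PostMessage()
    		// returns immediately, so the driver's callback thread is never blocked.
    		if (flush)
    			PostMessage(g_hwndTarget, WM_APP_MIDIBATCH, 0, 0);
    	}
    The window procedure would then drain the whole buffer in one go under the same lock, and a cheap GUI timer (SetTimer()) could flush a partially filled bundle after a few milliseconds, so quiet passages don't get stuck waiting.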

    Yes, this would mean intentionally introducing some extra latency, which is generally unwanted in a live MIDI processing context. But it may be possible to reduce or even effectively hide that latency by some algorithmic refinement. Also, the extra latency may matter less if transporting MIDI events in Windows messages is only needed for your program's GUI and the critical processing is done off that path (which is a good idea anyway). I already have some ideas in mind in that context, but I don't want to go into too much detail as long as I might still be on the wrong track entirely.

  5. #5
    Join Date
    Mar 2018
    Posts
    5

    Re: Receiving MIDI on 2 interfaces, merging and outputting on MIDI output

    Hello! Thank you for the reply!
    Sorry about the no-code issue. The project is kind of a mess right now, as I still have to refactor everything into separate files (everything is in a single file now, and it's big).
    I spent a whole bunch of time trying to implement the AutoResetEvent to block a thread until there is MIDI data available. Mainly because of my lack of Visual C/C++/C# programming skills, I couldn't make my global MIDI input callback access the AutoResetEvent object, which is part of a class. As the project is a mix of managed and unmanaged code (I know it's not good practice, but it's what I was able to put together, and it works), I could not declare the AutoResetEvent as a global, because the compiler complains that a ref class cannot be a global, and my lack of skills did not let me get the callback to access that variable as a member of the class (I spent a whole big bunch of time reading and looking for different ways to do it and could not get my head around it). I did, however, manage to control the AutoResetEvent from a button click event without issues. The issue was, and still is, getting access to the Set() method of the AutoResetEvent object inside the global void CALLBACK.

    I partially resolved the issue yesterday by taking another approach, after I realized that winmm has the midiConnect and midiDisconnect functions, which essentially connect any MIDI input to any MIDI output at a lower level. It works without any program intervention and without additional software processing. This of course may not be the best approach, because it is very dependent on the MIDI interface drivers, and I've seen many badly or incompletely implemented products which work under some circumstances and not under others, or lack SysEx transparency, etc., like an unpredictable black box that could respond unpredictably under certain conditions... so I would definitely also like to be able to implement the first option, where I have total control over what goes through and what doesn't.
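
    For reference, the lower-level thru is set up roughly like this (a stripped-down, untested outline of the approach; the device IDs are placeholders and error handling is kept to a minimum):

    Code:
    	#include <windows.h>
    	#include <mmsystem.h>
    	#pragma comment(lib, "winmm.lib")

    	HMIDIIN  g_hThruIn  = NULL;
    	HMIDIOUT g_hThruOut = NULL;

    	bool StartMidiThru(UINT inDeviceId, UINT outDeviceId)
    	{
    		if (midiInOpen(&g_hThruIn, inDeviceId, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
    			return false;
    		if (midiOutOpen(&g_hThruOut, outDeviceId, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
    		{
    			midiInClose(g_hThruIn);
    			return false;
    		}

    		// Let winmm route everything arriving on the input straight to the output.
    		if (midiConnect((HMIDI)g_hThruIn, g_hThruOut, NULL) != MMSYSERR_NOERROR ||
    		    midiInStart(g_hThruIn) != MMSYSERR_NOERROR)
    		{
    			midiOutClose(g_hThruOut);
    			midiInClose(g_hThruIn);
    			return false;
    		}
    		return true;
    	}

    	void StopMidiThru()
    	{
    		midiInStop(g_hThruIn);
    		midiDisconnect((HMIDI)g_hThruIn, g_hThruOut, NULL);
    		midiOutClose(g_hThruOut);
    		midiInClose(g_hThruIn);
    	}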

    If anyone has any tips, I'd greatly appreciate them.

    This is what my callback looks like; it's located within the main namespace of my project but outside of any class:

    Code:
    	void CALLBACK MidiInProc(HMIDIIN hMidiIn, UINT wMsg, DWORD dwInstance, DWORD dwParam1, DWORD dwParam2)
    	{
    		switch (wMsg)
    		{
    		case MIM_OPEN: { midi_device_open = 1; break; }
    		case MIM_CLOSE: { midi_device_open = 0; break; }
    		case MIM_DATA:
    			.... etc
    		}
    	}

    This is my main form class and the AutoResetEvent that is part of it:

    Code:
    	public ref class Form1 : public System::Windows::Forms::Form
    	{
    	public:
    		AutoResetEvent^ ar1 = gcnew AutoResetEvent(false);

    	public:
    		Form1(void) { .... }
    	};

    If I try to place the creation of ar1 in global space before the callback, so that the callback can see it, I get the error "global or static variable may not have managed system type". So I can't figure out how to make ar1 visible to the callback function. Neither could I get access to the Form1 class from within the callback, so that's another closed door on this side as well.

    Regards,
    Konstantin

    Quote Originally Posted by Eri523 View Post
    You are not showing any of your code, so I need to guess to some degree.

    It sounds like you are forwarding every single MIDI event in an individual Windows message. With lots of incoming events, this imposes a heavy load on your receiving message loop and the Windows message dispatching system. It may be better to buffer incoming events in your receiver callback until a certain count of events is reached or a timeout of a few milliseconds elapses, and forward them bundled, multiple events in a single Windows message. This could drastically reduce the system load caused by MIDI event handling.

    Yes, this would mean intentionally introducing some extra latency, which is generally unwanted in a live MIDI processing context. But it may be possible to reduce or even effectively hide that latency by some algorithmic refinement. Also, the extra latency may be less relevant if transporting MIDI events in Windows messages is just relevant for your program's GUI and the critical processing is done off that path (which is a good idea anyway). I already have some ideas in mind in that context, but don't want to go into too much detail, as long as I still might be on the wrong track entirely.
    Last edited by 2kaud; March 19th, 2018 at 04:12 AM.

  6. #6
    Join Date
    Jun 2010
    Location
    Germany
    Posts
    2,675

    Re: Receiving MIDI on 2 interfaces, merging and outputting on MIDI output

    Well, I'm afraid the interesting parts of your MidiInProc() are hidden behind your "... etc". The two cases shown just assign an integer value to what looks like a global variable, and that is not performance critical at all...

    To gain access to your event from within MidiInProc(), use AutoResetEvent's base class EventWaitHandle to create a named event. On the native side, use the Win32 API function CreateEvent() to create an event with the same name. Because of the identical name, the underlying system event is also identical. The return type of CreateEvent() is the native HANDLE, which can be assigned to a global variable. (If MidiInProc() is the only place in your native code where you need access to the event, it can just as well be a static local variable of that function, which I would prefer in this case.) It should be safe to do the native event creation in the variable's initialization. If for some reason you decide not to do so, initialize the variable to NULL and check for that before event creation and assignment; otherwise you'd pretty likely get a nice resource leak.
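
    A minimal, untested sketch of what I mean (the event name "MySynthMidiEvent" is just an example):

    Code:
    	#include <windows.h>
    	#include <mmsystem.h>

    	// Managed side (e.g. a member of Form1); the worker thread blocks in WaitOne():
    	//
    	//   using namespace System::Threading;
    	//   EventWaitHandle^ midiEvent = gcnew EventWaitHandle(false,
    	//       EventResetMode::AutoReset, "MySynthMidiEvent");
    	//   ...
    	//   midiEvent->WaitOne();

    	// Native side: the callback opens the very same kernel object by name.
    	void CALLBACK MidiInProc(HMIDIIN hMidiIn, UINT wMsg, DWORD_PTR dwInstance,
    	                         DWORD_PTR dwParam1, DWORD_PTR dwParam2)
    	{
    		// Created once; if the managed side created it first, this just opens it.
    		static HANDLE hMidiEvent =
    			::CreateEventW(NULL, FALSE /*auto-reset*/, FALSE, L"MySynthMidiEvent");

    		if (wMsg == MIM_DATA)
    		{
    			// ... store the event data where the worker thread can pick it up ...
    			::SetEvent(hMidiEvent);   // wakes the thread waiting in WaitOne()
    		}
    	}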

    Named events are system-global, and taking them solely for intra-process use is a bit of hackish overkill. However, the "by the book" alternative is deriving your own event class from SafeWaitHandle, which is considerably more complex. See the class documentation for a related code sample. (The sample code seems to not be available in C++/CLI, but the C# version should not be too difficult to understand.)

  7. #7
    Join Date
    Mar 2018
    Posts
    5

    Re: Receiving MIDI on 2 interfaces, merging and outputting on MIDI output

    Thanks a lot for the answer and the advice! I'll look into that!

    Here comes a bit more of the callback code, which, by the way, is for the first MIDI input, the one that handles data coming from the hardware synthesizer; I just send messages to the WndProc, where I decode them along with my own "type of MIDI message" variables.

    Code:
    	void CALLBACK MidiInProc(HMIDIIN hMidiIn, UINT wMsg, DWORD dwInstance, DWORD dwParam1, DWORD dwParam2)
    	{
    		switch (wMsg)
    		{
    		case MIM_OPEN: { midi_device_open = 1; break; }
    		case MIM_CLOSE: { midi_device_open = 0; break; }
    		case MIM_DATA: 
    		{
    			midi_in_byte[0] = (unsigned char)(dwParam1 & 0xFF);
    			if (midi_in_byte[0] == (0xB0 | MIDI_OUT_CHANNEL) || midi_in_byte[0] == (0x90 | MIDI_OUT_CHANNEL) || midi_in_byte[0] == (0x80 | MIDI_OUT_CHANNEL))
    			{
    				midi_in_message[0] = (unsigned char)(dwParam1 & 0xFF);
    				midi_in_message[1] = (unsigned char)((dwParam1 >> 8) & 0xFF);
    				midi_in_message[2] = (unsigned char)((dwParam1 >> 16) & 0xFF);
    				messageready = 1;
    				::SendMessage((HWND)Application::OpenForms[0]->Handle.ToInt32(), WM_USER, 0, 0);
    			}
    		}
    		..... etc, same kind of structure for the MIDI long data to handle System Exclusive messages
    		}
    	}
    and later on:

    Code:
    	protected: virtual void WndProc(Message% m) override
    	{
    		// Listen for operating system messages.
    		switch (m.Msg)
    		{
    		case WM_USER:
    		{
    			if (messageready)
    			{
    				messageready = 0;
    				update_user_interface_based_on_received_data();
    			}
    			break;
    		}
    		}
    		// Pass everything on to the base class as usual.
    		Form::WndProc(m);
    	}

    Thanks again!
    Regards,
    Konstantin

    Quote Originally Posted by Eri523 View Post
    Well, I'm afraid the interesting parts of your MidiInProc() are hidden behind your "... etc". The two cases shown just do assignments of an integer value to which looks like a global variable, and that is so much not performance critical...

    To gain access to your event from within MidiInProc(), instead of ManualResetEvent use its base class EventWaitHandle to create a named event. Use the native Win32 API function CreateEvent() to create an event with the same name. Because of the identical name, the underlying system event is also identical. The return type of CreateEvent() is the native HANDLE, which can be assigned to a global variable. (If MidiInProc() is the only place in your native code where you need access to the event, it can as well be a static local variable to that function, which I would prefer in this case.) It should be uncritical to do native event creation in the variable initialization. If for some reason you decide not to do so, initialize the variable to NULL and check for that before event creation and assignment; otherwise you'd pretty likely get a nice resource leak.

    Named events are system-global, and taking them solely for intra-process use is a bit of hackish overkill. However, the "by the book" alternative is deriving your own event class from SafeWaitHandle, which is considerably more complex. See the class documentation for a related code sample. (The sample code seems to not be available in C++/CLI, but the C# version should not be too difficult to understand.)

  8. #8
    Join Date
    Jun 2010
    Location
    Germany
    Posts
    2,675

    Re: Receiving MIDI on 2 interfaces, merging and outputting on MIDI output

    OK, this excerpt from MidiInProc() is a bit more informative, although it still doesn't let me draw any conclusions about the (potential) frequency of the specific MIDI events in question. One important piece of information is that you actually do send one Windows message per MIDI event, as I already suspected. I don't see how you convey the actual MIDI event when sending the message, at least not in the wParam and lParam message parameters. But in fact all that may be mostly irrelevant to the actual problem, because...

    Don't use message numbers from the WM_USER range for this kind of private intra-app messaging. The name is a bit misleading. Use numbers from the WM_APP range instead. This passage from MSDN gives a hint:

    Quote Originally Posted by MSDN
    Message numbers in the [WM_USER range] can be defined and used by an application to send messages within a private window class. These values cannot be used to define messages that are meaningful throughout an application because some predefined window classes already define values in this range. For example, predefined control classes such as BUTTON, EDIT, LISTBOX, and COMBOBOX may use these values. [...]
    I made the same mistake myself about two decades ago, resulting in a drastically misbehaving app. While debugging, I found out that my application window, in some frequent and common situations, got flooded with unexpected WM_USER messages that were actually meant to be dispatched to some standard Windows controls placed on the application window. You already mentioned performance problems with your app, specifically related to GUI interactions, which may be a strong hint in that direction.

    Changing from the WM_USER to the WM_APP range may not solve all your performance issues, but it may already lead to a significant mitigation.
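
    For illustration, something along these lines (the message name is of course made up):

    Code:
    	// A private message number from the WM_APP range (WM_APP ... 0xBFFF is
    	// reserved for use by the application).
    	#define WM_APP_MIDIDATA (WM_APP + 1)

    	// Sender, e.g. in MidiInProc():
    	//   ::SendMessage(hwndMain, WM_APP_MIDIDATA, 0, 0);

    	// Receiver, in the overridden WndProc():
    	//   case WM_APP_MIDIDATA: { /* handle the MIDI notification */ break; }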

  9. #9
    Join Date
    Mar 2018
    Posts
    5

    Re: Receiving MIDI on 2 interfaces, merging and outputting on MIDI output

    Taking note of your advice!

    Let me explain a little further what this whole thing consists of. I've included pictures of the device I designed and of the Windows application for controlling it.

    [Attached images: synth controller.jpg and V1LP.jpg]

    The synthesizer is a dual-mode device: it has a monophonic mode with a step sequencer and a polyphonic mode with 8 oscillators. Both modes have around 128 independently controllable parameters, some of them accessible through the physical knobs and others hidden and only controllable via MIDI controller numbers.
    There is a fuller description at this link:

    https://www.gearslutz.com/board/elec...nthesizer.html

    Although it's not very up to date, as I have added a whole bunch of features since that initial description.
    For controlling the hidden parameters on the synth via MIDI, I started developing the Windows application so that I could have all of the parameters on screen and have the ease of debugging the firmware of the hardware with all the controls handy. So the basic way of using this hardware/software pair is with a good-brand MIDI interface that seamlessly supports all the standard MIDI protocol messages. The synth is connected to the computer bidirectionally with two MIDI cables. For this I am using the main MIDI callback, which exclusively handles the data received from the synth. The MIDI out of the PC is hooked to the MIDI in of the synth. The slider controls, checkboxes, etc. all generate MIDI controller data and send it to the MIDI output.
    The synthesizer features a preset storage system which allows storing up to 122 different configurations of all 128 parameters, separately for both modes, mono and poly (244 presets in total); they are stored on an onboard EEPROM memory chip. I also included a way to download all presets from the EEPROM into a computer file and vice versa, i.e. to load a presets file into the onboard non-volatile memory. For this communication I am using the System Exclusive messages of the MIDI standard.
    The MIDI in callback I pasted parts of earlier handles all this messaging between the synth and the Windows application.

    To make the synth produce sound, especially in the polyphonic mode, it needs to receive MIDI note messages, which I can accomplish seamlessly using a good-brand MIDI interface that supports all of the MIDI standard and has a well-developed driver (the same hardware interface I used to develop the application). I open the same MIDI port in the music-making program and send MIDI note data there. The same port can be opened in my Windows application, and it can communicate with the synth "simultaneously" through the same port; I figure the driver is in charge of scheduling the order of messages to the output as they are filled into its buffer.

    The issue I was trying to work around concerns less capable MIDI interfaces, and I happened to come across one. It uses the default Windows USB drivers and does not allow opening the same port in different applications at the same time. I don't know if this is due to the particular interface or whether that's how the standard Windows USB MIDI drivers work. This interface also does not seem to be entirely transparent when handling SysEx data.
    Hence all the prior ideas of making this work by implementing a second MIDI input in the program and routing the MIDI note data from the music-making app to the synth through my application.
    I realize that this is by far not the most user-friendly approach, nor the most efficient. The best solution would be to implement a composite USB device in the synthesizer; that way I could send MIDI data from the music app and handle the control data between my app and the synth much more independently, but I have yet to get to that part.

    A bit of a long post, but I hope the picture is clearer now.
    Thanks again for the advice about the messaging!

    Konstantin



    Quote Originally Posted by Eri523 View Post
    Ok, this excerpt from MidiInProc() is a bit more informative, although it still doesn't enable me to make any conclusions about the (potential) frequency of the specific MIDI events in question. One important information is that you actually do send one Windows message per MIDI event, as I already suspected. I don't see how you convey the actual MIDI event when sending the message, at least not in the wParam and lParam message parameters. But in fact all that may be mostly irrelevant regarding the actual problem, because...

    Don't use message numbers from the WM_USER range for this kind of private intra-app messaging. The name is a bit misleading. Use numbers from the WM_APP range instead. This passage from MSDN gives a hint:



    I made the same mistake myself about two decades ago, resulting in a drastically misbehaving app. In debugging I found out that my application window, in some frequent and common situations, got flooded with unexpected WM_USER messages, that actually were meant to be dispatched to some standard Windows controls placed on the application window. You already mentioned performance problems with your app, specifically related to user GUI interactions, which may be a strong hint in that direction.

    Changing from the WM_USER to the WM_APP range may not solve all your performance issues, but it may already lead to a significant mitigation.
