Multiple endpoints?

Jul 14, 2009 at 6:16 PM

I'm writing a solution where we want to make good choices now for high scalability in the future. My business logic both receives messages and synchronously sends new messages to a database service, then waits for the reply. I've tried to wire up two endpoints: one for receiving the messages destined for the business logic, and another for sending to (and, primarily, receiving replies back from) the database service.

Simple illustration:

Client (UI) - sends a request to the BusinessLogic

BusinessLogic - processes the message, but needs to bring something in from the database, so it sends a new message type to the database service queue

Database service - retrieves data from a database and sends a reply back


My thinking was that if I have several messages "on queue" for the business logic, I'd like those to remain in the queue until my business logic is ready to process them (NumberOfWorkerThreads). But if I have several messages in the queue, then I won't be able to use the same queue to receive my replies from the database service, right? So I figured I need a separate queue for those so I don't create deadlocks. I hope you're following my reasoning, but if there's a better way to accomplish this I'm happy to change.
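
To make the deadlock reasoning concrete, here is a minimal in-process analogue, with plain BlockingCollection instances standing in for the two MSMQ queues (all names here are made up for illustration, not SSB's API). The point is that the worker can safely block on the reply queue precisely because replies never have to queue up behind pending business messages:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

static class TwoQueueDemo
{
    // Processes everything on the "main" queue and returns the results in order.
    public static List<string> Run()
    {
        // Stand-ins for the two queues: incoming work, and a dedicated reply queue.
        var workQueue  = new BlockingCollection<string>();
        var replyQueue = new BlockingCollection<string>();

        // Pretend several requests are already waiting "on queue".
        workQueue.Add("request-1");
        workQueue.Add("request-2");

        // Fake database service: always answers on the dedicated reply
        // queue, so its response can't get stuck behind pending work.
        Task SendDbRequest(string req) =>
            Task.Run(() => replyQueue.Add("db-reply-for:" + req));

        var results = new List<string>();
        while (workQueue.Count > 0)
        {
            string request = workQueue.Take();   // next business message
            SendDbRequest(request);              // ask the "database"

            // Block for the reply -- safe, because it arrives on a separate
            // queue. If the reply went to workQueue instead, Take() would
            // hand us "request-2" here and the real reply would be stuck.
            string reply = replyQueue.Take();
            results.Add(request + " -> " + reply);
        }
        return results;
    }

    static void Main()
    {
        foreach (var line in Run()) Console.WriteLine(line);
    }
}
```

With a single queue and all worker threads blocked waiting for replies, nothing is left to consume the replies when they arrive; the second queue breaks that cycle.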

Right now I'm trying to wire it up, but going through the code it seems that I'm limited to one transport/endpoint. I'm still trying to get it set up, but I wanted to start a discussion first on whether this is the right approach, and whether you see anything in SSB that would prevent me from doing this.


Thanks for a great project!


Jul 20, 2009 at 4:16 PM

I definitely follow your logic. Currently, SSB is limited to one endpoint/transport per instance, but this is an unfortunate limitation. I will be making modifications that should allow it to host multiple transports, for different reasons, such as being able to have a single endpoint talk on both MSMQ and the new .NET Service Bus, for instance. In your case, though, I want to see if I understand your problem correctly: you will take a message off the queue, realize that you need a response from the database, send the request to the database, but because you're holding up the processing of the queue while waiting for the response, you can't actually receive the response?

Jul 20, 2009 at 5:22 PM

I hacked a way to have multiple endpoints/transports by adding a hook into my IObjectBuilder; that seems to work fine. Yes, the scenario is correct: I want to limit the number of worker threads on the main incoming transport (because I know roughly how much processing I can do in parallel). Also, each main message will probably spawn up to 10 database requests, so in terms of number of messages I have more on the "database transport", and I want to make sure I don't deadlock my processes.

Jul 20, 2009 at 5:33 PM

What was your IObjectBuilder hack? Anything that would be generally useful? As for your problem, your approach is probably as good as any. Another approach would be not to block your queue while waiting for the db response, but to persist the message when it reaches the wait state and move on to the next message; then, when the DB response arrives, load the message from its persistence and continue processing. This is how NServiceBus works with its Saga concept, and it is probably more scalable in the long term, especially if the response time from the database can be variable. But if both processes perform in a relatively predictable way and you can get the throughput you want with two queues, then there isn't anything built into SSB that would make things more elegant.
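
That persist-and-resume flow can be sketched in a few lines. An in-memory dictionary stands in for real saga persistence, and all names are made up for illustration; this is not NServiceBus's or SSB's actual API:

```csharp
using System;
using System.Collections.Generic;

// Park the in-flight work when it reaches the wait state, and resume it
// when the correlated DB reply arrives -- the worker never blocks.
static class SagaSketch
{
    static readonly Dictionary<Guid, string> parked = new();
    public static List<string> Completed { get; } = new();

    // Step 1: handle an incoming message -- instead of blocking while the
    // database responds, "persist" the state and free the worker thread.
    public static Guid Handle(string message)
    {
        var correlationId = Guid.NewGuid();
        parked[correlationId] = message;   // saga state, keyed by correlation id
        return correlationId;              // the id travels with the DB request
    }

    // Step 2: the DB reply arrives (possibly much later, on any thread):
    // load the parked state and finish processing.
    public static void OnDbReply(Guid correlationId, string dbData)
    {
        if (parked.Remove(correlationId, out var message))
            Completed.Add(message + " + " + dbData);
    }

    static void Main()
    {
        var id = Handle("order-17");
        // ...the worker is free to take the next message here...
        OnDbReply(id, "customer-row");
        Console.WriteLine(Completed[0]);   // order-17 + customer-row
    }
}
```

Because nothing blocks between the two steps, the same worker-thread count keeps draining the main queue no matter how slow the database is.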

Jul 20, 2009 at 9:36 PM

We're using Castle Windsor for our IoC so I'm not sure if the hack is useful for anybody else, but here's the meat of the hack:
public object Build(System.Type typeToBuild)
{
    Debug.Assert(ioc != null);

    if (typeToBuild == null)
        throw new ArgumentNullException("typeToBuild");

    // Only use the named registrations when an override was supplied.
    if (!string.IsNullOrEmpty(endpointOverride))
    {
        if (typeToBuild == typeof(IMessageBus))
            return ioc.Resolve("ssbMessageBus_" + endpointOverride, typeToBuild);
        if (typeToBuild == typeof(ITransport))
            return ioc.Resolve("ssbTransport_" + endpointOverride, typeToBuild);
    }

    return ioc.Resolve(typeToBuild);
}


Basically I create two instances of ObjectBuilder that are passed to CreateEndpoint and take this "override" name, then query the IoC for a named instance if the override is specified; otherwise it just resolves by default. So I wire everything up for the main "receiver transport", then I create a new endpoint that I use for my "database transport" by creating new instances of IMessageBus and ITransport, and wire that in with the rest of the instances that are singletons.

Yes, having it as a process that I can persist would also work, but I prefer that requests sit in the main queue until I'm ready to process them. That way I can easily monitor how far behind the queue is and schedule resources based on that. Plus, I don't have to implement anything more than this extra endpoint. Almost all of my requests are considered short-running, so for me there would be considerable overhead in persisting/reloading them as sagas, but each implementation is different.



Jul 20, 2009 at 9:36 PM

Btw, I added two issues; I don't know if you get notifications for them, but I figured I'd mention it.

Jul 21, 2009 at 4:13 AM


I'm working on an ActiveMQ transport implementation that will try to use all the features available in ActiveMQ, like multiple worker threads, multiple consumers on the same or distinct queues, publish/subscribe (topics), text or binary transport, transactions, batching...

While exploring ActiveMQ I noticed an interesting feature that deals with the request/reply pattern: ActiveMQ allows you to create a temporary queue from a session, which you then use to receive your reply, waiting for a response on this temp queue with a specific timeout.

It has 2 main advantages:

- it prevents any locking mentioned by HakanL

- because ActiveMQ binds the temporary queue to the session, as soon as the session is closed it cleans up those temp queues, avoiding dangling queues.

However, I haven't implemented it yet or tested its performance...

I want to support this strategy instead of SSB's correlation service. Is this something that could be generalized in SSB?
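
For reference, the temporary-queue request/reply flow looks roughly like this with the Apache.NMS client. This is an untested sketch; the broker URL, queue name, message text and timeout are all placeholder values:

```csharp
using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class TempQueueRequestReply
{
    static void Main()
    {
        // Placeholder broker URL.
        IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            connection.Start();

            // Temporary queue scoped to this session: the broker removes it
            // automatically when the session/connection closes, so no
            // dangling queues are left behind.
            ITemporaryQueue replyQueue = session.CreateTemporaryQueue();

            ITextMessage request = session.CreateTextMessage("get-customer-42");
            request.NMSReplyTo = replyQueue;   // tell the responder where to answer

            using (IMessageProducer producer =
                       session.CreateProducer(session.GetQueue("db.requests")))
                producer.Send(request);

            // Wait for the reply on the temp queue with a timeout;
            // Receive returns null if nothing arrives in time.
            using (IMessageConsumer consumer = session.CreateConsumer(replyQueue))
            {
                var reply = consumer.Receive(TimeSpan.FromSeconds(30)) as ITextMessage;
                Console.WriteLine(reply == null ? "timed out" : reply.Text);
            }
        }
    }
}
```

The responder simply sends its answer to whatever destination it finds in NMSReplyTo, so no shared reply queue or correlation lookup is needed on the requester's side.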

By the way, while exploring the code of SSB, I noticed a potential memory leak in the current implementation of the async result of a send:

When you wait for a response after your send, with a specific timeout, an object is registered with the message id in the correlation service. If a response arrives and its CorrelationId attribute matches the message id, then it is the expected response and the service removes the association. But it won't do that if the timeout expires, which means that if messages are never replied to, memory will grow!
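
The fix amounts to removing the correlation entry on the timeout path as well, e.g. in a finally block. A self-contained sketch of that bookkeeping (the names are illustrative, not SSB's actual correlation service):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Illustrative stand-in for a correlation service: maps message ids to
// wait handles so a send can block until the matching reply arrives.
static class CorrelationSketch
{
    static readonly ConcurrentDictionary<Guid, SemaphoreSlim> pending = new();

    public static int PendingCount => pending.Count;

    // Register, wait, and -- crucially -- remove the entry on timeout too.
    public static bool WaitForReply(Guid messageId, TimeSpan timeout)
    {
        var gate = new SemaphoreSlim(0, 1);
        pending[messageId] = gate;
        try
        {
            return gate.Wait(timeout);   // true if the reply signalled us in time
        }
        finally
        {
            pending.TryRemove(messageId, out _);   // cleanup on BOTH paths,
        }                                          // so timeouts can't leak
    }

    // Called by the receive side when a message with a CorrelationId arrives.
    public static void OnReply(Guid correlationId)
    {
        if (pending.TryGetValue(correlationId, out var gate))
            gate.Release();
    }

    static void Main()
    {
        var id = Guid.NewGuid();
        bool gotReply = WaitForReply(id, TimeSpan.FromMilliseconds(50)); // nobody replies
        Console.WriteLine($"gotReply={gotReply}, pending={PendingCount}");
        // prints: gotReply=False, pending=0
    }
}
```

Without the finally, the entry for a never-answered message would stay in the dictionary forever, which is exactly the growth described above.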


Jul 22, 2009 at 5:34 AM

Thanks for the heads up. I will look at cleaning up the resources on timeout. Sounds like an interesting project you're working on. I'm not opposed to generalizing the request/response mechanism - did you have a general concept of the kind of API you would like to see?