The OpenSSL library allows reading from an underlying socket with SSL_read and writing to it with SSL_write. On a non-blocking socket these functions can fail with SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE (as reported by SSL_get_error), depending on what the SSL protocol needs at that moment (for example, when renegotiating a connection).
I don't really understand what the API wants me to do with these results.
Imagine a server app that accepts client connections, sets up a new SSL session, makes the underlying socket non-blocking, and then adds the file descriptor to a select/poll/epoll loop.
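A minimal sketch of the setup I have in mind (error handling omitted; setup_client is just an illustrative name, and the SSL_CTX is assumed to be configured elsewhere):

```c
#include <fcntl.h>
#include <sys/select.h>
#include <openssl/ssl.h>

SSL *setup_client(SSL_CTX *ctx, int client_fd, fd_set *readfds)
{
    /* make the underlying socket non-blocking */
    fcntl(client_fd, F_SETFL, fcntl(client_fd, F_GETFL, 0) | O_NONBLOCK);

    /* wrap the fd in a fresh SSL session */
    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, client_fd);
    SSL_accept(ssl);   /* on a non-blocking fd this may itself
                          return with WANT_READ/WANT_WRITE */

    /* register the fd with the select() loop */
    FD_SET(client_fd, readfds);
    return ssl;
}
```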
If a client sends data, the main loop dispatches this to an SSL_read. What has to be done if SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE is returned? WANT_READ seems easy, because the next main-loop iteration can simply lead to another SSL_read. But if SSL_read returns WANT_WRITE, what call should be made, and with what parameters? And why doesn't the library issue that call itself?
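To make the question concrete, this is roughly the read path I picture (on_readable is a hypothetical handler my main loop would invoke on a readability event):

```c
/* Roughly the read path I have now; the WANT_WRITE branch is the part
 * I don't understand. */
void on_readable(SSL *ssl)
{
    char buf[4096];
    int n = SSL_read(ssl, buf, sizeof(buf));
    if (n > 0) {
        /* feed buf[0..n) to the application-protocol parser */
        return;
    }
    switch (SSL_get_error(ssl, n)) {
    case SSL_ERROR_WANT_READ:
        /* easy: return to the loop; the next readability event
           triggers another SSL_read */
        break;
    case SSL_ERROR_WANT_WRITE:
        /* ??? wait for writability, then call... what, exactly,
           and with which parameters? */
        break;
    default:
        /* real error or clean shutdown */
        break;
    }
}
```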
If the server wants to send a client some data, it uses SSL_write. Again, what is to be done if WANT_READ or WANT_WRITE is returned? Can WANT_WRITE be answered by simply repeating the very same call that just failed? And if WANT_READ is returned, should one return to the main loop and let select/poll/epoll take care of it? But what about the message that was supposed to be written in the first place?
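The corresponding write path (send_to_client is again a made-up name):

```c
/* The write path raises the mirror-image question. */
int send_to_client(SSL *ssl, const void *data, int len)
{
    int n = SSL_write(ssl, data, len);
    if (n > 0)
        return n;
    switch (SSL_get_error(ssl, n)) {
    case SSL_ERROR_WANT_WRITE:
        /* repeat the exact same SSL_write(ssl, data, len) once the
           socket becomes writable again? */
        break;
    case SSL_ERROR_WANT_READ:
        /* go back to select() and wait for readability, but then who
           remembers data, and who retries the write? */
        break;
    default:
        break;
    }
    return -1;
}
```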
Or should the read be done right away, immediately after the failed write? Then what protects against reading bytes that belong to the application protocol and having to deal with them somewhere in the outskirts of the app, when the real parser sits in the main loop?