There is often a lot of discussion about the most efficient way to convert things in C++. Personally I like boost::lexical_cast; I find it gives clear and readable code, which is very important in the systems I write, especially for maintenance and upkeep.
However, many assume it to be slow; indeed, most authors on the topic turn almost immediately to C for the fastest way to convert integers to strings, and unfortunately I have found much the same: even with std::string and std::ostringstream available, the old sprintf tends to be faster.
But what holds back many users of std::string is a lack of understanding of that standard library staple, so here is my little investigation into the speed of conversion, using a mix of C and standard C++, to give you a good idea of how fast things can be and how to use the standard string class and its memory to best effect.
The first trick you will see here is the use of a std::string as a memory buffer; you can do this by declaring your string and then resizing it...
std::string MyName;
MyName.resize(12);
The memory at &MyName[0] now gives you 12 writable characters to use just as you would a char* buffer, which is useful for cutting out uncontrolled char* buffers left, right and center, and is perhaps the least used "tip" I can give when working with std::string.
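To make that concrete, here is a minimal sketch (the names are purely for illustration) of treating the string's storage as a plain character buffer:

#include <cstring>
#include <string>

std::string l_name;
l_name.resize(12);                 // 12 writable characters
std::strcpy(&l_name[0], "Hello");  // write into the string's own storage, just as with a char* buffer

Note that the string's length stays at 12 regardless of how much you actually write into it; more on that below.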
So, what conversions do we have?...
class Conversions
{
public:
    static const std::string IntToString(const int& p_Input);
    static const std::string IntToString2(const int& p_Input);
    static const std::string IntToString3(const int& p_Input);
    static const char* IntToString4(const int& p_Input);
    static void TestConversions(const int& p_Cycles);
};
The first three will each, in their own way, return a ready-built std::string; the fourth conversion returns a raw char*, so the programmer has to delete[] the result or risk a memory leak.
I still want that fourth result to end up in a std::string, however, and I'm not going to worry about the leaked memory; so although the function performs a new[], I will not perform a delete[], but will simply assign the char* to a std::string upon return, making the timings fairer.
So, let's look at the code for each of the functions, and the timing in the test function. I'm going to use boost::posix_time for the timings here:
const std::string Conversions::IntToString(const int& p_Input)
{
    std::string l_buffer;
    l_buffer.resize(33);                   // use the string itself as the output buffer
    sprintf(&l_buffer[0], "%d", p_Input);
    return l_buffer;
}
Above we can see the first function. Using our string resizing tip, we resize the string to act as a buffer of 33 characters (far more than the 12 needed for a signed 32-bit integer in decimal, sign and terminating null included) and then use the old-fashioned sprintf from the cstdio header. Many claim this to be the fastest, the de facto conversion, even more so than using itoa.
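One caveat with the function above: after the sprintf the string's length is still 33, the digits being followed by a terminating null and unused characters. Where the logical length matters, a variant along the same lines (a sketch with a hypothetical name, not one of the timed tests) can trim using the count sprintf returns:

#include <cstdio>
#include <string>

const std::string IntToStringTrimmed(const int& p_Input)
{
    std::string l_buffer;
    l_buffer.resize(33);
    const int l_written = sprintf(&l_buffer[0], "%d", p_Input);  // sprintf returns the number of characters written
    l_buffer.resize(l_written);                                  // drop the unused tail
    return l_buffer;
}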
const std::string Conversions::IntToString2(const int& p_Input)
{
    std::ostringstream l_oss;
    l_oss << p_Input;
    return l_oss.str();
}
Next is the standard library way of working: we create a new ostringstream, stream the integer into it, then return the std::string from the stream. I believe this will be pretty slow; my instinct is that constructing the stream and then extracting the result is going to cost us, but we shall see in a moment.
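As an aside, if the cost of constructing the stream is the worry, one variation (a sketch with a hypothetical name, not one of the functions timed below) is to reuse a single stream and clear it between calls, at the price of thread safety:

#include <sstream>
#include <string>

std::string IntToStringReusedStream(const int& p_Input)
{
    static std::ostringstream l_oss;  // constructed once, shared across calls (not thread safe)
    l_oss.str("");                    // clear the previous contents
    l_oss << p_Input;
    return l_oss.str();
}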
const std::string Conversions::IntToString3(const int& p_Input)
{
    char l_buffer[33];
    sprintf(l_buffer, "%d", p_Input);
    return std::string(l_buffer);
}
Next we have another use of sprintf; this time, rather than resizing a string, we write into a local character array and then construct a std::string from it for the result. I think this may be the fastest, but again we shall see.
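A small refinement worth knowing about here (again just a sketch of how the body could look, not what was timed): sprintf returns the number of characters it wrote, and std::string has a pointer-plus-length constructor, so the extra scan for the terminating null can be skipped:

char l_buffer[33];
const int l_written = sprintf(l_buffer, "%d", p_Input);
return std::string(l_buffer, l_written);  // construct from pointer and length, no strlen needed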
const char* Conversions::IntToString4(const int& p_Input)
{
    char* l_buffer = new char[33];
    sprintf(l_buffer, "%d", p_Input);
    return l_buffer;
}
Finally, very similar to the third test, this conversion allocates a new character array on the heap as the buffer and then uses sprintf. This is going to leak memory if we don't delete[], but I'm ignoring that for now and just testing the speed.
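If you did want to plug that leak at the call site, the usual pattern (a sketch, not what the timing loop below does) is to copy the characters into a std::string and then release the buffer:

const char* l_raw = Conversions::IntToString4(42);
std::string l_copy(l_raw);  // copy the characters into a managed string
delete[] l_raw;             // release the buffer allocated with new[]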
Now onto the conversion test function. The basic layout is: take the start time, call the conversion a lot of times assigning the result to a local std::string, then take the end time and output the difference in milliseconds...
Now, this code is going to be fast in all cases, so we need enough sample calls to get a reading... I'm settling on 30 million calls, 30000000.
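One thing to bear in mind with these measurements: second_clock only ticks once a second, so the millisecond figures below are effectively whole seconds. If you want finer-grained readings, boost::posix_time also provides microsec_clock, used in exactly the same way; a sketch of the substitution:

boost::posix_time::ptime l_start(boost::posix_time::microsec_clock::local_time());
// ... run the conversion loop ...
boost::posix_time::ptime l_end(boost::posix_time::microsec_clock::local_time());
std::cout << (l_end - l_start).total_milliseconds() << std::endl;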
void Conversions::TestConversions(const int& p_Cycles)
{
    // Test 1
    std::cout << "Test 1...";
    boost::posix_time::ptime l_end;
    boost::posix_time::ptime l_start(boost::posix_time::second_clock::local_time());
    std::string l_result;
    for (int i = 0; i < p_Cycles; ++i)
    {
        l_result = IntToString(i);
    }
    l_end = boost::posix_time::second_clock::local_time();
    boost::posix_time::time_duration l_diff = l_end - l_start;
    std::cout << l_diff.total_milliseconds() << std::endl;

    // Test 2
    std::cout << "Test 2...";
    l_start = boost::posix_time::second_clock::local_time();
    for (int i = 0; i < p_Cycles; ++i)
    {
        l_result = IntToString2(i);
    }
    l_end = boost::posix_time::second_clock::local_time();
    l_diff = l_end - l_start;
    std::cout << l_diff.total_milliseconds() << std::endl;

    // Test 3
    std::cout << "Test 3...";
    l_start = boost::posix_time::second_clock::local_time();
    for (int i = 0; i < p_Cycles; ++i)
    {
        l_result = IntToString3(i);
    }
    l_end = boost::posix_time::second_clock::local_time();
    l_diff = l_end - l_start;
    std::cout << l_diff.total_milliseconds() << std::endl;

    // Test 4
    std::cout << "Test 4...";
    l_start = boost::posix_time::second_clock::local_time();
    for (int i = 0; i < p_Cycles; ++i)
    {
        l_result = IntToString4(i);  // the char* is copied into l_result; the buffer itself leaks
    }
    l_end = boost::posix_time::second_clock::local_time();
    l_diff = l_end - l_start;
    std::cout << l_diff.total_milliseconds() << std::endl;
}
Let's see what our output is:
Test 1...6000
Test 2...17000
Test 3...5000
Test 4...6000
Immediately we can see my hunch about the string stream was correct: it's very much slower, nearly three times slower than the sprintf versions.
Perhaps surprisingly, at least to most readers, we can see that the string::resize approach with sprintf is very close to the other uses of sprintf.
Since sprintf itself should take roughly the same time in each case, what we've really timed in tests 1, 3 and 4 is the speed of our memory management: how quickly each function makes the result available.
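One way to see how much of that cost is allocation (a sketch of the idea, not one of the numbered tests above) is to size a single string once, outside the loop, and let every call write into it, so each iteration does nothing but format digits:

std::string l_buffer;
l_buffer.resize(33);                 // allocate once, before the loop
for (int i = 0; i < p_Cycles; ++i)
{
    sprintf(&l_buffer[0], "%d", i);  // no allocation per call, just the formatting
}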
Some readers may be screaming at me to use itoa, and one could do that with a char buffer, or even a resized string, thus:
std::string buffer;
buffer.resize(33);
itoa(p_Input, &buffer[0], 10);  // itoa takes (value, buffer, radix)
However, itoa is not a standard function and some compilers don't supply it, so in portable code you generally have to fall back to the lowest common denominator, and sprintf is just that.
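If the unbounded nature of sprintf worries you, snprintf (standard in C99 and picked up by most C++ compilers; it is officially in <cstdio> from C++11) is just as portable in practice and will never overrun the buffer; a sketch of the third conversion rewritten with it:

char l_buffer[33];
snprintf(l_buffer, sizeof(l_buffer), "%d", p_Input);  // bounds-checked formatting
return std::string(l_buffer);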
There is also one last avenue, the lexical cast...
// Test 5
std::cout << "Test 5...";
l_start = boost::posix_time::second_clock::local_time();
for (int i = 0; i < p_Cycles; ++i)
{
    l_result = boost::lexical_cast<std::string>(i);
}
l_end = boost::posix_time::second_clock::local_time();
l_diff = l_end - l_start;
std::cout << l_diff.total_milliseconds() << std::endl;
Now, strictly this approach should take the boost::bad_lexical_cast exception into account, but we're ignoring that at this juncture, since we have a known data source (the int i) that is always in a valid range.
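For completeness, when the source is untrusted (say, a string typed by a user rather than our loop counter), the usual shape is to catch boost::bad_lexical_cast; a minimal sketch, where l_userInput is just a stand-in:

#include <boost/lexical_cast.hpp>
#include <string>

std::string l_userInput("not a number");  // stand-in for untrusted input
int l_value = 0;
try
{
    l_value = boost::lexical_cast<int>(l_userInput);
}
catch (const boost::bad_lexical_cast&)
{
    l_value = 0;  // fall back to a default, log the failure, etc.
}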
Re-running, our test results are similar for the original calls...
Test 1...6000
Test 2...17000
Test 3...5000
Test 4...5850
The new fifth test...
Test 5...1000
This is very much quicker than any of the other solutions proposed...
So, there you go...
#include <boost/lexical_cast.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <string>
#include <iostream>

int main()
{
    const int p_Cycles = 30000000;  // the 30 million calls from earlier
    std::string l_result;

    // Test 5
    std::cout << "Test 5...";
    boost::posix_time::ptime l_start =
        boost::posix_time::second_clock::local_time();
    for (int i = 0; i < p_Cycles; ++i)
    {
        l_result = boost::lexical_cast<std::string>(i);
    }
    boost::posix_time::ptime l_end =
        boost::posix_time::second_clock::local_time();
    boost::posix_time::time_duration l_diff = l_end - l_start;
    std::cout << l_diff.total_milliseconds() << std::endl;
    return 0;
}
Use lexical cast...
And about the exception handling: be smart. The speed of this code is not noticeably affected by letting the function throw the exception up, or by having the whole loop handle the exception once. So be smart... and stop mucking about with conversions in C; you're only fooling yourself that they're faster than the other options.