For a while now, we have been seeing Elasticsearch indexing quickly
fall behind as some log files generated in the gate have grown larger.
Currently, we download a full log file into memory and then emit it
line-by-line to be received by a logstash listener. When log files are
large (for example, 40 MB) logstash gets bogged down processing them.
Instead of downloading full files into memory, we can stream the files
and emit their lines on the fly, alleviating load on the log processor.
This:
* Replaces urllib2.urlopen with requests using stream=True (see the
sketch below)
* Removes manual decoding of the gzip and deflate compression
formats, as these are decoded automatically by requests.iter_lines
* Removes unrelated, unused imports
* Removes an unused argument 'retry' from the log retrieval method
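
For illustration, a minimal sketch of the streaming approach, assuming
a hypothetical stream_log_lines() helper and emit() callback standing
in for the worker's actual log retrieval and logstash submission code:

    import requests

    def stream_log_lines(url, emit):
        # stream=True defers the body download; bytes are pulled from
        # the socket only as iter_lines() consumes them, so the full
        # file is never held in memory.
        resp = requests.get(url, stream=True)
        resp.raise_for_status()
        # requests decodes gzip/deflate content encodings
        # transparently, so no manual decompression step is needed.
        for line in resp.iter_lines(decode_unicode=True):
            if line:  # iter_lines can yield empty keep-alive chunks
                emit(line)

The function and callback names above are illustrative only, not the
worker's real interface.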
Change-Id: I6d32036566834da75f3a73f2d086475ef3431165