Java Understanding ByteArrayOutputStream and ByteArrayInputStream
I was reading this article: http://www.java-tips.org/java-se-tips-100019/120-javax-sound/917-capturing-audio-with-java-sound-api.html (I won't reproduce all of its code here).
I need to clarify my understanding, and I would like an explanation of how ByteArrayInputStream and ByteArrayOutputStream are used.
Based on the complete code:
In the captureAudio() method, focusing on the while loop:
while (running) {
    int count = line.read(buffer, 0, buffer.length);
    if (count > 0) {
        out.write(buffer, 0, count);
    }
}
out.close();
By definition (see lines 64 and 65):
final TargetDataLine line = (TargetDataLine)
    AudioSystem.getLine(info);
On line 79, the line is the microphone, and the accompanying comment reads // Reads audio data from the data line's input buffer. In other words, the bytes coming in from the microphone are stored in the byte array buffer.
On line 81:
out.write(buffer, 0, count);
out is a ByteArrayOutputStream object. The ByteArrayOutputStream class of the Java IO API lets you capture data written to a stream in an array: you write your data to the ByteArrayOutputStream, and when you are done you call its toByteArray() method to obtain all the written data as a byte array. The internal buffer grows automatically as data is written to it.
In my words: ByteArrayOutputStream keeps growing, taking count bytes at a time from buffer.
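To convince myself of the growth behavior, here is a minimal standalone demo (not from the article; the chunk contents are arbitrary and just stand in for the bytes a TargetDataLine would deliver):

```java
import java.io.ByteArrayOutputStream;

public class BaosDemo {
    public static void main(String[] args) {
        // ByteArrayOutputStream grows its internal array automatically.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = {1, 2, 3, 4};
        // Simulate several reads from a data line: each time, "count"
        // bytes (here, the whole chunk) are appended to the stream.
        for (int i = 0; i < 3; i++) {
            out.write(chunk, 0, chunk.length);
        }
        // toByteArray() returns a freshly allocated copy of everything written.
        byte[] all = out.toByteArray();
        System.out.println(all.length); // 12
    }
}
```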
On the other side, in the playAudio() method, I can see that on the first line (line 101 of the complete code) all the bytes are taken:
byte audio[] = out.toByteArray();
https://docs.oracle.com/javase/7/docs/api/java/io/ByteArrayOutputStream.html#toByteArray()
"Creates a newly allocated byte array. Its size is the current size of this output stream and the valid contents of the buffer have been copied into it."
Now, on lines 102 and 103:
InputStream input =
    new ByteArrayInputStream(audio);
On lines 105 to 107 the bytes are passed through:
final AudioInputStream ais =
    new AudioInputStream(input, format,
        audio.length / format.getFrameSize());
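The third constructor argument is the stream length in sample frames, not bytes, which is why the code divides by the frame size. A small standalone check of that arithmetic (the 44.1 kHz, 16-bit, stereo format here is my own assumption, not necessarily the one the article uses):

```java
import javax.sound.sampled.AudioFormat;

public class FrameLengthDemo {
    public static void main(String[] args) {
        // Hypothetical format: 44.1 kHz, 16-bit samples, 2 channels,
        // signed PCM, little-endian.
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        // Frame size = bytes per sample * channels = 2 * 2 = 4 bytes.
        System.out.println(format.getFrameSize()); // 4
        byte[] audio = new byte[44100 * 4]; // one second of audio
        long frames = audio.length / format.getFrameSize();
        System.out.println(frames); // 44100 frames
    }
}
```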
Focusing on the while loop and the nearby lines:
int count;
while ((count = ais.read(
        buffer, 0, buffer.length)) != -1) {
    if (count > 0) {
        line.write(buffer, 0, count);
    }
}
line.drain();
line.close();
The bytes are taken from ais, and line (lines 110 and 111) represents the speakers:
final SourceDataLine line = (SourceDataLine)
    AudioSystem.getLine(info);
Question 1: out, from the captureAudio() method, keeps accumulating bytes indefinitely, but how does input, from the playAudio() method, take exactly the bytes required so they play back consistently?
Remember: out.toByteArray() takes all the bytes, yet the speakers do not play the same bytes over and over...
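Part of what puzzles me is the read semantics, so here is a minimal standalone demo (not from the article) of the documented ByteArrayInputStream behavior: it keeps an internal position, each read() delivers bytes exactly once, and read() returns -1 at the end of the array:

```java
import java.io.ByteArrayInputStream;

public class BaisDemo {
    public static void main(String[] args) {
        byte[] audio = new byte[10]; // stands in for out.toByteArray()
        ByteArrayInputStream input = new ByteArrayInputStream(audio);
        byte[] buffer = new byte[4];
        int total = 0, count;
        // Each read() advances the internal position; bytes are never
        // delivered twice, and -1 signals the end of the array.
        while ((count = input.read(buffer, 0, buffer.length)) != -1) {
            total += count; // reads of 4, 4, then 2 bytes
        }
        System.out.println(total); // 10 -- every byte exactly once
    }
}
```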
Question 2: Can I handle this situation (reading from the microphone via a TargetDataLine and writing to the speakers via a SourceDataLine) without using the two objects (ByteArrayOutputStream and ByteArrayInputStream) the article relies on?
Like the following code:
while (running) {
    int count = microphone.read(buffer, 0, buffer.length);
    if (count > 0) {
        speaker.write(buffer, 0, count);
    }
}
speaker.drain();
speaker.close();
Question 3: How can I implement a repeater (capture sound from the microphone and play it on the speakers continuously, for 1 or 2 hours)?
Note: without worrying about storing the bytes in memory (no storing to a file) and without playback delays.
I am not familiar with the Sound API.
But there's no particular reason why your last code snippet should not work, assuming that the input can be endlessly read while the output can be endlessly fed. The only issue is whether one end or the other "stalls" (here my lack of knowledge of the Sound API comes into play).
If the output side stalls for some reason, then the input side may overflow some internal buffer, losing data. If the input stalls, it's less of a problem. I don't know whether this is an actual issue with the Sound API. In contrast, with two threads (or async I/O), one managing the input and one managing the output, the input side would give your program the opportunity to cache incoming data using your own semantics rather than the API's, while the output channel is stalled.
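The two-thread idea above could be sketched with a bounded queue between capture and playback. This is only a sketch: the audio-line calls are replaced by dummy byte arrays so it runs anywhere, and the comments mark where line.read(...) and speaker.write(...) would go in a real repeater:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RepeaterSketch {
    public static void main(String[] args) throws InterruptedException {
        // A bounded queue decouples capture from playback. If playback
        // stalls, the capture thread blocks on put() instead of growing
        // an unbounded buffer in memory.
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(8);
        final int CHUNKS = 5; // a real repeater would loop until stopped

        Thread capture = new Thread(() -> {
            for (int i = 0; i < CHUNKS; i++) {
                byte[] chunk = new byte[1024]; // stands in for line.read(...)
                try {
                    queue.put(chunk); // blocks while the queue is full
                } catch (InterruptedException e) {
                    return;
                }
            }
        });

        long[] played = {0};
        Thread playback = new Thread(() -> {
            for (int i = 0; i < CHUNKS; i++) {
                try {
                    byte[] chunk = queue.take(); // blocks until data arrives
                    played[0] += chunk.length;   // stands in for speaker.write(...)
                } catch (InterruptedException e) {
                    return;
                }
            }
        });

        capture.start();
        playback.start();
        capture.join();
        playback.join();
        System.out.println(played[0]); // 5120 bytes passed through
    }
}
```

The bounded capacity is the key design choice: it caps memory use for an arbitrarily long session, at the cost of dropping or delaying capture when playback cannot keep up.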
The ByteArrayStreams are simply mechanisms for populating an ever-expanding byte array without managing it yourself and, similarly, for adding stream semantics on top of an underlying byte array (which gives you all sorts of useful capabilities).