"The Chinese Room" thought experiment is a famous argument by philosopher John Searle against the possibility of "strong" (conscious) AI.
It goes something like this:
You are locked inside a room which is part of a system that passes the Turing test in Chinese (i.e. a Chinese speaker is chatting with you via text, convinced you're a genuine Chinese speaker).
But in fact you're just following a set of (presumably unimaginably complex) rules to process the Chinese characters and produce a coherent response. You don't understand any of the Chinese; therefore, Searle argues, there is no understanding of Chinese and no consciousness anywhere in the Chinese room.
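The rule-following setup can be caricatured in a few lines of Python. Here a tiny lookup table plays the role of Searle's rule book (the table and its entries are invented for illustration; the real rules would have to be vastly more complex to pass a Turing test):

```python
# A toy "Chinese Room": the operator (this program) maps input symbols to
# output symbols by blindly consulting a rule book. It has no model of
# meaning -- it only matches and copies characters.

# Hypothetical rule book, a stand-in for the "unimaginably complex" rules.
RULE_BOOK = {
    "你好": "你好！",             # "hello" -> "hello!"
    "你会说中文吗？": "会。",      # "do you speak Chinese?" -> "yes."
}

def chinese_room(message: str) -> str:
    """Reply by pure symbol lookup; no understanding is involved."""
    # Fallback means "please say that again."
    return RULE_BOOK.get(message, "请再说一遍。")

print(chinese_room("你好"))  # the room answers correctly, understanding nothing
```

The point of the caricature is that every step is purely syntactic: the program manipulates character strings it never interprets, which is exactly the operator's position in Searle's room.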
I think the rebuttal is pretty simple: the system as a whole does "understand" Chinese, even though no individual part of it does (this is known as the "systems reply"). And this "understanding" would be indistinguishable from human understanding in every practical sense.
For more see: https://en.wikipedia.org/wiki/Chinese_room