EleutherAI corrects misconceptions from a New York Times article regarding Yi-34B and Llama 2, explaining common language model training practices and the architectural similarities shared by most LLMs. They emphasize that using well-known techniques does not make a model indebted to Meta's Llama 2: the architectural components in question generally predate Llama, and both Llama and Yi-34B build on publicly available research. What actually differentiates models is training and dataset curation.
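
To make the "shared architecture" point concrete, here is a minimal sketch (not code from EleutherAI or either model) of the decoder block used by Llama-style models: pre-norm RMSNorm, causal self-attention, and a SwiGLU MLP. All of these components were published before Llama (RMSNorm: Zhang & Sennrich, 2019; rotary embeddings: Su et al., 2021; SwiGLU: Shazeer, 2020). The dimensions are illustrative, and rotary position embeddings are omitted for brevity.

```python
# Minimal Llama-style decoder block, assuming PyTorch; sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RMSNorm(nn.Module):
    """Root-mean-square layer norm (no mean subtraction, no bias)."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = x.pow(2).mean(-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight


class SwiGLU(nn.Module):
    """Gated MLP: silu(x W_gate) * (x W_up), projected back to model width."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.gate(x)) * self.up(x))


class DecoderBlock(nn.Module):
    """Pre-norm causal attention + SwiGLU MLP, each with a residual connection."""
    def __init__(self, dim: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn_norm = RMSNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, bias=False, batch_first=True)
        self.mlp_norm = RMSNorm(dim)
        self.mlp = SwiGLU(dim, hidden=4 * dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.attn_norm(x)
        # Causal mask: each position may only attend to itself and earlier tokens.
        mask = torch.triu(torch.ones(x.size(1), x.size(1), dtype=torch.bool), 1)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attn_out
        return x + self.mlp(self.mlp_norm(x))


if __name__ == "__main__":
    block = DecoderBlock()
    tokens = torch.randn(2, 16, 256)   # (batch, sequence, hidden)
    print(block(tokens).shape)         # torch.Size([2, 16, 256])
```

Because this block design is common to many post-2022 open models, two models built from it can still behave very differently: the weights are determined by the training data, tokenizer, and optimization choices, which is the differentiation the post points to.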