This paper presents a comparison of ELMo-based models for the task of named entity recognition (NER) in the Russian language. The paper also compares architectures based on the Simple Recurrent Unit (SRU) and the Gated Recurrent Unit (GRU). All compared models were trained on a corpus of Russian news texts taken from the Wikinews resource and evaluated on the Russian subsets of the WikiNEuRal and MultiCoNER datasets. The datasets and original code are available at https://github.com/Abiks/distributed-semantic-models. The results obtained suggest that the SRU architecture is promising for solving the NER task while also offering high training speed. In addition, the quality of NER models based on uncontextualized character-based embeddings was found to be comparable to that of the other models discussed herein. This highlights the advantage of incorporating character-based embeddings into RNN- or transformer-based models, owing to their robustness to typos. However, the architectural details of the character-based block require further research.