This paper describes experiments on information extraction from Russian news texts in a setting with a wide variety of entity and relation types. We adapted the SpERT model, which uses a BERT network as its core, for the joint extraction of entities and relations. The results obtained for named entity recognition are quite good and comparable to results reported on English datasets. However, the hypothesis that joint extraction of entities and relations improves extraction quality was not confirmed. The results also showed that using the full context instead of a local context improves the extraction quality for both entities and relations.