This study investigates self-supervised learning in surgical computer vision as a way to overcome the scarcity of annotated data. By pre-training models on diverse surgical datasets, the research examines how different pre-training dataset combinations affect downstream performance on surgical recognition and safety-assessment tasks. The findings show significant performance improvements across these tasks and underscore that the composition of the pre-training data is a key factor in self-supervised learning methodologies.
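The abstract does not specify which self-supervised objective is used. As a purely illustrative sketch, the NumPy snippet below implements the NT-Xent contrastive loss from SimCLR-style pre-training, a common choice for this kind of label-free representation learning; all names and the choice of objective are assumptions, not details from the study.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N
    images. Matching rows across z1/z2 are positive pairs; every
    other row in the batch acts as a negative. (Illustrative sketch,
    not the study's actual objective.)
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T / temperature                       # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = z1.shape[0]
    # The positive for row i (i < n) is row i + n, and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
aligned = anchor + 0.01 * rng.normal(size=(8, 16))  # two views agree
unrelated = rng.normal(size=(8, 16))                # views are independent
# Agreeing views should yield a lower contrastive loss than unrelated ones.
assert nt_xent_loss(anchor, aligned) < nt_xent_loss(anchor, unrelated)
```

After pre-training an encoder with such an objective on unlabeled surgical video, the encoder is typically fine-tuned on a small labeled set for each downstream task (e.g. phase recognition), which is where the choice of pre-training datasets studied here would matter.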