
Job title: Software Engineer, Trust & Safety – USDS
Company: TikTok
Job description: We are seeking a talented Software Engineer to join our embedded Trust and Safety team, focusing on proactive risk detection. You’ll be a key contributor to our mission, developing impactful solutions that address high-harm, high-risk issues where traditional detection methods may fall short.
As a Software Engineer on the Risk and Response Detection and Automation team, you will contribute to the full lifecycle of our tools, from early concepts to production deployment. You’ll play a vital role in our Detection Engineering program, helping to build and scale systems that identify high-harm content and behavioral leakage on the platform. This involves collaborating closely with cross-functional Trust and Safety and Security teams on threat initiatives, refining tools to minimize false positives, and developing automation technologies to streamline critical processes. You’ll have opportunities to build a range of solutions, from intuitive threat-intelligence platforms and workflow engines to knowledge discovery aids powered by Large Language Models (LLMs) and Retrieval Augmented Generation (RAG).

The department is tasked with managing and executing highly sensitive and high-stakes workflows, often involving objectionable or disturbing content, including but not limited to bullying, hate speech, child abuse, sexual assault, and violent crimes.

In order to enhance collaboration and cross-functional partnerships, among other things, our organization currently follows a hybrid work schedule that requires employees to work in the office 3 days a week, or as directed by their manager/department. We regularly review our hybrid work model, and the specific requirements may change at any time.

This position will be full-time and will be based in Bellevue, WA; Los Angeles, CA; San Jose, CA; or New York, NY.

Responsibilities:
– Build and implement compelling and usable tools with technologies like React, Python, and Golang to improve consistency and efficiency across US Trust & Safety.
– Collaborate directly with users to inform, refine, and validate concepts, translating user needs into robust and scalable features from MVP to production.
– Develop and deploy lightweight models in Python and SQL for our Detection Engineering program, helping to catch high-harm content and behavioral leakage at scale.
– Contribute to the exploration and application of AI/LLM technologies to enhance our proactive detection capabilities and reduce the technical lift of current bespoke systems.
– Participate actively in team events, including hackathons, monthly product demos, and quarterly on-sites.
– Analyze metrics and derive insights to continuously improve the effectiveness of our tooling and detection solutions.

Qualifications:

Minimum Qualifications:
– A minimum of 2 years of experience (preferred) building and contributing to applications, with a solid understanding of the systems you’ve worked on.
– Fluency in Python; proficiency in other modern programming languages (e.g., Go, JavaScript) is a plus. Proficiency in relational database technologies (e.g., MySQL) and NoSQL database technologies (e.g., MongoDB).
– Prior experience in Frontend development, Site Reliability Engineering (SRE), Data Engineering, or Machine Learning.
– Strong foundational knowledge in software engineering principles, data structures, and algorithms.
– Curiosity and eagerness to learn about Large Language Models (LLMs) and Artificial Intelligence (AI), and how they can be applied to build more predictive and intelligent systems.
– A proactive approach to problem-solving, with the ability to work effectively both independently and collaboratively in a fast-paced environment.
– Working knowledge of cloud platforms (e.g., OCI, GCP, AWS) and associated technologies (e.g., Kubernetes, networking).

Preferred Qualifications:
– Strong problem-solving skills and attention to detail. Excellent communication and interpersonal skills, capable of working effectively with both technical and non-technical stakeholders.
– Ability to handle confidential information with discretion.
– Experience working on Trust & Safety, detection, security, or integrity systems.
– Familiarity with emerging threat models, adversarial behaviors, or large-scale abuse mitigation.
– Exposure to ML or LLM-powered internal tools (e.g., RAG pipelines, Model Context Protocol (MCP) servers).
– Prior work in emotionally sensitive domains or content moderation support functions
– Your ability to work in a high-tempo environment and to adapt and respond to the day-to-day challenges of the role.
– Your resilience and commitment to self-care to manage the emotional demands of the role.
Location: San Jose, CA
Job date: Wed, 27 Aug 2025 22:58:59 GMT