Machine Learning Research Engineer

March 13, 2023
Offered Salary: N/A
Location: 201 Borough High Street, UK
Working address: 201 Borough High Street, UK
Contract Type: Permanent
Working Time: Full-time position
Working type: Hybrid
Ref info: N/A
We're conducting interviews on a rolling basis for this role; applications will close on 16 December.
As a Machine Learning Research Engineer at Conjecture, you will ideally have a breadth of knowledge spanning solving practical engineering problems, building SOTA deep learning models, proposing and writing high-quality research papers with a team, and reasoning about the merits and downsides of different approaches to AI alignment.
We believe that making meaningful progress in aligning AI systems requires strong and constant “contact with reality.” In practice, what this means is that our research emphasises empirical results, engineering-oriented solutions, and quick iteration cycles to rapidly collect bits of evidence. Research engineers here regularly build infrastructure to solve problems they encounter, and have developed a suite of tools that greatly accelerate our AI safety research. 
At Conjecture you will likely contribute to one of our prosaic alignment projects: conducting interpretability work, automating alignment research, or training new models and further developing our infrastructure and tooling. You will help improve our core infrastructure, speed up inference and builds, and extend our tooling capabilities.

Responsibilities may include:


- Working with other alignment researchers to propose, run, analyse, and visualise experiments.

- Working on large-scale ML frameworks that train models in parallel across many machines.

- Implementing new models or optimisation techniques from research papers.

- Building large-scale datasets.

- Building internal tooling and infrastructure for model inference, visualisation, and interpretability.

- Doing exploratory mechanistic interpretability research.


You might be a good fit for this role if:


- You have an understanding of performance in HPC workloads, have worked with large GPU clusters, and ideally have experience with modern ML frameworks (e.g., PyTorch, JAX).

- You are proactive, driven, and creative. Candidates with interesting past research and GitHub profiles full of open-source contributions stand out to us.

- You are able both to solve small, isolated problems, such as bugs in code, and to grapple with large meta-level problems, such as epistemic strategies and research agendas.

- You have broad knowledge of topics related (even tangentially!) to machine learning and alignment, e.g. computer science, information theory, statistics, philosophy, neuroscience.

- You are good at collaboration and teamwork; many of our projects are large engineering efforts that involve most or all of the team.

- You care about the impact of your work on the long-term future of humanity and about creating safe and beneficial AI.


Experience with the following would be a bonus:


- Large-scale distributed computing and machine learning systems.

- Large models that need to be parallelised to fit in memory (think tensor / data / pipeline parallelism or surrounding techniques), including frameworks such as DeepSpeed.

- Deep expertise with machine learning frameworks (e.g., PyTorch, JAX).

- Academic publications in fields related (even tangentially) to AI safety and machine learning.

- University-level physics, mathematics, computer science or computational neuroscience.

- Non-language-modelling aspects of ML, such as reinforcement learning, Bayesian graphical models, and statistics.
