Codex, a large language model (LLM) trained on a variety of codebases, exceeds the previous state of the art in its capacity to synthesize and generate code. Although Codex provides a plethora of benefits, models that can generate code at such scale have significant limitations, alignment problems, the potential to be misused, and the potential to increase the rate of progress in technical fields that may themselves have destabilizing impacts or misuse potential. Yet such safety impacts are not yet known or remain to be explored. In this paper, we outline a hazard analysis framework constructed at OpenAI to uncover hazards or safety risks that the deployment of models like Codex may impose technically, socially, politically, and economically. The analysis is informed by a novel evaluation framework that determines the capacity of advanced code generation techniques against the complexity and expressivity of specification prompts, and their capability to understand and execute them relative to human ability.