We are interested in image manipulation via natural language text, a task that is extremely useful for multiple AI applications but requires complex reasoning over multi-modal spaces. Recent work on neuro-symbolic approaches has been quite effective in solving such tasks, as these approaches offer better modularity, interpretability, and generalizability. One noteworthy approach is NSCL, developed for the task of Visual Question Answering (VQA). We extend NSCL to the image manipulation task and propose a solution referred to as NeuroSIM. Unlike previous works, which either require supervised training data or can only handle very simple reasoning instructions over single-object scenes, NeuroSIM can perform complex multi-hop reasoning over multi-object scenes and requires only weak supervision in the form of annotated data for the VQA task. On the language side, NeuroSIM contains neural modules that parse an instruction into a symbolic program that guides the manipulation. These programs are based on a Domain Specific Language (DSL) comprising object attributes as well as manipulation operations. On the perceptual side, NeuroSIM contains neural modules that first generate a scene graph of the input image and then modify the scene graph representation in accordance with the parsed instruction. To train these modules, we design novel loss functions that test the correctness of the manipulated object and scene graph representations via query networks trained merely on the VQA dataset. An image decoder is trained to render the final image from the manipulated scene graph representation. The entire NeuroSIM pipeline is trained without any intermediate supervision. Extensive experiments demonstrate that our approach is highly competitive with state-of-the-art supervised baselines.