Recent advances in emerging technologies such as monolithic 3D integration (M3D-IC) and emerging non-volatile memory (eNVM) have made it possible to embed logic operations in memory. This alleviates the "memory wall" challenge stemming from the time and power expended on moving data in conventional von Neumann computing paradigms. We propose an M3D SRAM dot-product engine that supports in-SRAM computation for applications such as matrix multiplication and artificial neural networks. In addition, we propose a novel computing-in-RRAM memory architecture that efficiently handles the computational intensity of sparse dot products, specifically the index matching required in the sparse matrix-vector multiplication used in support vector machines (SVMs). At maximum throughput, our proposed RRAM architecture achieves an 11.3× speedup over a near-memory accelerator.
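The index-matching step that dominates sparse dot products can be illustrated in software as a merge over sorted coordinate lists (a minimal sketch for intuition only; the proposed RRAM architecture performs this matching in memory rather than in a software loop):

```python
def sparse_dot(a, b):
    """Dot product of two sparse vectors, each stored as a list of
    (index, value) pairs sorted by index."""
    i = j = 0
    acc = 0.0
    while i < len(a) and j < len(b):
        ia, va = a[i]
        jb, vb = b[j]
        if ia == jb:      # indices match: multiply-accumulate
            acc += va * vb
            i += 1
            j += 1
        elif ia < jb:     # advance whichever list lags behind
            i += 1
        else:
            j += 1
    return acc

# Dense equivalents [0, 2, 0, 3] . [1, 0, 0, 4] -> only index 3 matches
print(sparse_dot([(1, 2.0), (3, 3.0)], [(0, 1.0), (3, 4.0)]))  # 12.0
```

Because only the matching indices contribute, most iterations of this loop do comparison work rather than arithmetic, which is why accelerating the index matching itself pays off.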