User engagement in online public discourse often includes self-disclosure, the revelation of personal information. Such disclosures on online public platforms (e.g., news forums) become part of a shared history, vulnerable to detrimental use by advertisers and malicious parties. Yet users engage in self-disclosing behavior to attain strategic goals such as relational development, social connectedness, identity clarification, and social control. In this work, we develop supervised models to detect instances of self-disclosure in users' comments in the context of public discourse. We validate the performance of our models on three different datasets. Our detection models achieve an accuracy of 75.8 percent on a news discourse dataset, and their performance on two secondary datasets is on par with, if not better than, that of existing methods. We examine the rate at which users self-disclose to understand when and to what extent users abide by group norms of such behavior. Our results show that self-disclosing users are often similar in their alignment with or divergence from the group norm; such similarly divergent users in a conversation use similar language in their disclosures. Finally, we reflect on the implications of alignment with or divergence from group norms in light of online privacy.
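To make the detection task concrete, the following is a minimal sketch of a supervised self-disclosure classifier. It is not the paper's actual model: the training examples are invented toy comments, and a simple Naive Bayes bag-of-words approach stands in for whatever feature set and learner the full system uses.

```python
import math
from collections import Counter

# Toy labeled comments (hypothetical; 1 = self-disclosure, 0 = no disclosure).
TRAIN = [
    ("i was diagnosed with diabetes last year", 1),
    ("my daughter just started college", 1),
    ("i lost my job in the recession", 1),
    ("the article misses the economic context", 0),
    ("this policy will never pass the senate", 0),
    ("great reporting as usual", 0),
]

def tokenize(text):
    return text.lower().split()

def train_nb(examples, alpha=1.0):
    """Collect per-class word counts and class priors (Laplace-smoothed NB)."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in examples:
        priors[label] += 1
        counts[label].update(tokenize(text))
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab, alpha

def predict(model, text):
    """Return the class with the highest smoothed log-probability."""
    counts, priors, vocab, alpha = model
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in (0, 1):
        lp = math.log(priors[label] / total)
        denom = sum(counts[label].values()) + alpha * len(vocab)
        for tok in tokenize(text):
            lp += math.log((counts[label][tok] + alpha) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train_nb(TRAIN)
print(predict(model, "i was recently diagnosed with cancer"))  # 1 (disclosure)
print(predict(model, "the senate should reject this policy"))  # 0 (no disclosure)
```

A production version would replace the toy data with annotated comments and the Naive Bayes scorer with a stronger learner, but the supervised pipeline shape (labeled text in, disclosure label out) is the same.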