Cyberbullying is a major online safety issue that is common among adolescents. Recent reports show that more than one in five students in the United States is a victim of cyberbullying. The majority of cyberbullying incidents occur on public social media platforms such as Twitter. Automated cyberbullying detection methods can help prevent cyberbullying before harm is done to the victim. In this study, we analyze a corpus of cyberbullying tweets to construct an automated detection model. Our method rests on two claims that are supported by our results. First, unlike other approaches that assume cyberbullying instances contain vulgar or profane words, we show that they do not necessarily contain negative words. Second, we highlight the importance of context, the characteristics of the actors involved, and their positions in the network structure in detecting cyberbullying, rather than considering only the textual content.