Web browsing has never been easy for blind people, primarily due to the serial press-and-listen interaction mode of screen readers, their "go-to" assistive technology. Even simple navigational browsing actions on a page require a multitude of shortcut presses. Auto-suggesting the next browsing action has the potential to help blind users swiftly complete various tasks with minimal effort. The extant auto-suggest feature in web pages is limited to filling form fields; in this paper, we generalize it to any screen-reading browsing action on the web, e.g., navigation, selection, etc. Towards that, we introduce SuggestOmatic, a personalized and scalable unsupervised approach that predicts the user's most likely next browsing action and proactively suggests it, sparing the user from pressing numerous shortcuts to complete that action. SuggestOmatic rests on two key ideas. First, it exploits the user's Action History to identify and suggest a small set of browsing actions that will, with high likelihood, contain the action the user wants to do next; once the user picks a suggestion, the corresponding action is executed automatically. Second, the Action History is represented as an abstract temporal sequence of operations over semantic web entities called Logical Segments, i.e., collections of related HTML elements such as widgets, search results, menus, and forms. This semantics-based abstract representation of browsing actions makes SuggestOmatic scalable across websites: actions recorded on one website can be used to make suggestions for other similar websites. We also describe an interface that uses an off-the-shelf physical Dial as an input device, which enables SuggestOmatic to work with any screen reader. The results of a user study with 12 blind participants indicate that SuggestOmatic can significantly reduce browsing task times, by as much as 29% compared with a hand-crafted macro-based web automation solution.
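To make the abstraction concrete, the following is a minimal, purely illustrative sketch (not the paper's implementation): browsing actions are modeled as (operation, Logical-Segment-type) pairs with no site-specific details, and a simple bigram frequency model over the Action History surfaces likely next actions. All names here (`Action`, `ActionSuggester`, the segment labels) are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical abstract action: (operation, logical-segment type),
# e.g. ("navigate", "search-results"). Because no site-specific
# selectors appear, histories can transfer across similar websites.

class ActionSuggester:
    """Illustrative next-action suggester over an abstract Action
    History, using simple bigram frequencies."""

    def __init__(self):
        # Maps a previous action to a counter of follow-up actions.
        self.bigrams = defaultdict(Counter)

    def record(self, history):
        # Count each consecutive (previous, next) action pair.
        for prev, nxt in zip(history, history[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, last_action, k=3):
        # Return up to k of the most frequent follow-up actions.
        return [a for a, _ in self.bigrams[last_action].most_common(k)]

# Toy Action History from one browsing session.
history = [
    ("navigate", "menu"), ("select", "search-form"),
    ("fill", "search-form"), ("navigate", "search-results"),
    ("select", "search-form"), ("fill", "search-form"),
]
s = ActionSuggester()
s.record(history)
print(s.suggest(("select", "search-form")))
# → [('fill', 'search-form')]
```

A real system would, as the abstract notes, learn such patterns in an unsupervised manner and execute the chosen suggestion automatically; the bigram model above merely shows how the segment-level abstraction decouples suggestions from any particular website.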