A new method to analyze and classify daily activities in personal audio recordings (PARs) is presented. The method employs speech activity detection (SAD) and speaker diarization systems to provide a high-level semantic segmentation of the audio file. Subsequently, a number of audio, speech, and lexical features are computed to characterize events in daily audio streams. The features are selected to capture the statistical properties of conversations, topics, and turn-taking behavior, yielding a classification space that captures differences between interaction types. The proposed system is evaluated on 9 days of data from the Prof-Life-Log corpus, which contains naturalistic long-duration audio recordings (each file is collected continuously and lasts between 8 and 16 hours). Our experimental results show that the proposed system achieves good classification accuracy on a difficult real-world dataset.
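
To make the pipeline concrete, the sketch below shows how turn-taking statistics of the kind described above could be derived from diarization output. This is an illustrative toy example, not the authors' implementation: the segment format, function name, and feature set are all hypothetical.

```python
from collections import Counter

def turn_taking_features(segments):
    """Compute simple conversation statistics from diarization output.

    segments: list of (speaker_id, start_sec, end_sec) tuples, assumed
    sorted by start time. Returns a dict of features loosely inspired
    by turn-taking descriptors (hypothetical feature set).
    """
    if not segments:
        return {"num_speakers": 0, "num_turns": 0,
                "mean_seg_dur": 0.0, "speech_time": 0.0}
    durations = [end - start for _, start, end in segments]
    speakers = Counter(spk for spk, _, _ in segments)
    # Count a "turn" whenever the speaker changes between consecutive segments.
    turns = sum(1 for a, b in zip(segments, segments[1:]) if a[0] != b[0])
    return {
        "num_speakers": len(speakers),       # distinct speakers in the event
        "num_turns": turns,                  # speaker-change count
        "mean_seg_dur": sum(durations) / len(durations),
        "speech_time": sum(durations),       # total speech in seconds
    }

# Example: a short two-person exchange.
diarized = [("A", 0, 3), ("B", 3, 5), ("A", 5, 9)]
feats = turn_taking_features(diarized)
```

Feature vectors of this kind, concatenated with audio and lexical descriptors, could then be fed to a standard classifier to label each segmented event.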