I have a training dataset of 1,600,000 tweets. How do I train on such a huge amount of data?
I have tried nltk.NaiveBayesClassifier, but if I run it, training takes more than 5 days.
import nltk

# featureList is assumed to be defined earlier: the list of all candidate words.
def extract_features(tweet):
    tweet_words = set(tweet)
    features = {}
    for word in featureList:
        features['contains(%s)' % word] = (word in tweet_words)
    return features

training_set = nltk.classify.util.apply_features(extract_features, tweets)
NBClassifier = nltk.NaiveBayesClassifier.train(training_set)  # This takes lots of time
What should I do?
I need to classify my dataset using SVM and Naive Bayes.
The dataset I want to use: Link
Sample (training dataset):
Label    Tweet
0        url aww bummer you shoulda got david carr third day
4        thankyou for your reply are you coming england again anytime soon
Sample (test dataset):
Label    Tweet
4        love lebron url
0        lebron beast but still cheering the til the end

I have to predict Label 0/4 only.
How can I train on this huge dataset efficiently?
Solution
Following the recommendation on proper feature extraction, you can use the TfidfVectorizer from the scikit-learn library to extract the important words from the tweets. With the default configuration, together with a simple LogisticRegression, it gives me 0.8 accuracy. Hope that helps.
Here is an example of how to use it for your problem:
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

# Load the data; both files have no header and two columns: label, tweet.
train_df_raw = pd.read_csv('train.csv', header=None, names=['label', 'tweet'])
test_df_raw = pd.read_csv('test.csv', header=None, names=['label', 'tweet'])

# Drop empty tweets and the neutral test tweets (label 2); keep only 0/4.
train_df_raw = train_df_raw[train_df_raw['tweet'].notnull()]
test_df_raw = test_df_raw[test_df_raw['tweet'].notnull()]
test_df_raw = test_df_raw[test_df_raw['label'] != 2]

# Map the labels: 0 stays 0 (negative), 4 becomes 1 (positive).
y_train = [x if x == 0 else 1 for x in train_df_raw['label'].tolist()]
y_test = [x if x == 0 else 1 for x in test_df_raw['label'].tolist()]
X_train = train_df_raw['tweet'].tolist()
X_test = test_df_raw['tweet'].tolist()

print('At vectorizer')
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(X_train)
print('At vectorizer for test data')
X_test = vectorizer.transform(X_test)

print('At classifier')
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
predictions = classifier.predict(X_test)
print('Accuracy:', accuracy_score(y_test, predictions))
conf_matrix = confusion_matrix(y_test, predictions)
print(conf_matrix)

Output:

Accuracy: 0.8
[[135  42]
 [ 30 153]]
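Since you specifically mention SVM and Naive Bayes, the same TF-IDF features can be fed to scikit-learn's LinearSVC and MultinomialNB. A minimal sketch, assuming X_train, y_train, X_test and y_test were built exactly as above:

from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# Linear SVM trained on the sparse TF-IDF matrix.
svm_clf = LinearSVC()
svm_clf.fit(X_train, y_train)
print('SVM accuracy:', accuracy_score(y_test, svm_clf.predict(X_test)))

# Multinomial Naive Bayes also works directly on the TF-IDF features.
nb_clf = MultinomialNB()
nb_clf.fit(X_train, y_train)
print('Naive Bayes accuracy:', accuracy_score(y_test, nb_clf.predict(X_test)))

Because both models operate on the sparse matrix produced by the vectorizer rather than building a per-word feature dict for every tweet, training on the full 1,600,000 tweets should take minutes rather than days.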