• Destroy、
    2019-03-05
    In the source code:
    # Take the first 2000 of the 12000 rows as the test set, and the rest as the training set
    test_x, test_y = X[2000:], y[2000:]
    train_x, train_y = X[:2000], y[:2000]

    This part of the code is wrong, isn't it? It should be:
    test_x, test_y = X[:2000], y[:2000]
    train_x, train_y = X[2000:], y[2000:]

    Editor's reply: Hello, the article has been corrected. Thank you for the feedback.
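
    As an aside, scikit-learn's train_test_split does the same job and shuffles by default, which is usually safer than a positional slice. A minimal sketch, assuming X and y are the 12000-row feature and label arrays from the lesson:

    from sklearn.model_selection import train_test_split

    # Hold out 2000 of the 12000 rows for testing; shuffling (the default)
    # avoids the ordering bias a positional slice would inherit.
    train_x, test_x, train_y, test_y = train_test_split(
        X, y, test_size=2000, random_state=42)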

    
     11
  • third
    2019-03-04
    AdaBoost still comes out as the best-performing algorithm.
    I noticed that the first two classifiers finish much faster.
    Why AdaBoost wins:
    1. AdaBoost does more computation, especially in iterating over and combining the weak classifiers.
    2. Individuals that are combined well can create more value together.

    Decision stump (weak classifier) accuracy: 0.7867
    Decision tree classifier accuracy: 0.7891
    AdaBoost classifier accuracy: 0.8138

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.feature_extraction import DictVectorizer

    # 1. Load the data
    train_data = pd.read_csv('./Titanic_Data/train.csv')
    test_data = pd.read_csv('./Titanic_Data/test.csv')

    # 2. Clean the data
    # Fill NaN ages with the mean age
    train_data['Age'].fillna(train_data['Age'].mean(), inplace=True)
    test_data['Age'].fillna(test_data['Age'].mean(), inplace=True)
    # Fill missing fares with the mean fare
    train_data['Fare'].fillna(train_data['Fare'].mean(), inplace=True)
    test_data['Fare'].fillna(test_data['Fare'].mean(), inplace=True)
    # Fill missing ports with the most common port of embarkation
    train_data['Embarked'].fillna('S', inplace=True)
    test_data['Embarked'].fillna('S', inplace=True)

    # Feature selection
    features = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
    train_features = train_data[features]
    train_labels = train_data['Survived']
    test_features = test_data[features]

    # One-hot encode the categorical Embarked column into 0/1 features
    dvec = DictVectorizer(sparse=False)
    train_features = dvec.fit_transform(train_features.to_dict(orient='records'))
    test_features = dvec.transform(test_features.to_dict(orient='records'))

    # Decision stump (weak classifier)
    dt_stump = DecisionTreeClassifier(max_depth=1, min_samples_leaf=1)
    dt_stump.fit(train_features, train_labels)

    print('Decision stump (weak classifier) accuracy: %.4f' % np.mean(cross_val_score(dt_stump, train_features, train_labels, cv=10)))

    # Decision tree classifier
    dt = DecisionTreeClassifier()
    dt.fit(train_features, train_labels)

    print('Decision tree classifier accuracy: %.4f' % np.mean(cross_val_score(dt, train_features, train_labels, cv=10)))

    # AdaBoost classifier
    ada = AdaBoostClassifier(base_estimator=dt_stump, n_estimators=200)
    ada.fit(train_features, train_labels)

    print('AdaBoost classifier accuracy: %.4f' % np.mean(cross_val_score(ada, train_features, train_labels, cv=10)))

    Editor's reply: The results are correct. Generally, AdaBoost will do slightly better than a plain decision tree classifier.

    
     4
  • 王彬成
    2019-03-04
    Since the passenger test set lacks ground-truth labels, I use K-fold cross-validation accuracy.
    --------------------
    Results:
    Decision stump (weak classifier) accuracy: 0.7867
    Decision tree classifier accuracy: 0.7813
    AdaBoost classifier accuracy: 0.8138
    -------------------------
    Code:
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import AdaBoostClassifier
    import pandas as pd
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.model_selection import cross_val_score

    # Number of AdaBoost iterations (weak learners)
    n_estimators = 200

    # Load the data
    train_data = pd.read_csv('./Titanic_Data/train.csv')
    test_data = pd.read_csv('./Titanic_Data/test.csv')

    # Module 2: data cleaning
    # Fill NaN ages with the mean age
    train_data['Age'].fillna(train_data['Age'].mean(), inplace=True)
    test_data['Age'].fillna(test_data['Age'].mean(), inplace=True)
    # Fill NaN fares with the mean fare
    train_data['Fare'].fillna(train_data['Fare'].mean(), inplace=True)
    test_data['Fare'].fillna(test_data['Fare'].mean(), inplace=True)
    # Fill NaN Embarked values with the most common port of embarkation
    train_data['Embarked'].fillna('S', inplace=True)
    test_data['Embarked'].fillna('S', inplace=True)

    # Feature selection
    features = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
    train_features = train_data[features]
    train_labels = train_data['Survived']
    test_features = test_data[features]

    # One-hot encode the categorical Embarked column into 0/1 features
    dvec = DictVectorizer(sparse=False)
    train_features = dvec.fit_transform(train_features.to_dict(orient='records'))
    test_features = dvec.transform(test_features.to_dict(orient='records'))

    # Decision stump (weak classifier)
    dt_stump = DecisionTreeClassifier(max_depth=1, min_samples_leaf=1)
    dt_stump.fit(train_features, train_labels)

    print('Decision stump (weak classifier) accuracy: %.4f' % np.mean(cross_val_score(dt_stump, train_features, train_labels, cv=10)))

    # Decision tree classifier
    dt = DecisionTreeClassifier()
    dt.fit(train_features, train_labels)

    print('Decision tree classifier accuracy: %.4f' % np.mean(cross_val_score(dt, train_features, train_labels, cv=10)))

    # AdaBoost classifier
    ada = AdaBoostClassifier(base_estimator=dt_stump, n_estimators=n_estimators)
    ada.fit(train_features, train_labels)

    print('AdaBoost classifier accuracy: %.4f' % np.mean(cross_val_score(ada, train_features, train_labels, cv=10)))

    Author's reply: Good job.

    
     2
  • 梁林松
    2019-03-04
    To run the second code block you need to import two modules:
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.neighbors import KNeighborsRegressor

    Editor's reply: Correct, you need to import the corresponding regression classes.
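
    For context, here is a minimal sketch of how those two regressors slot into the lesson's house-price comparison; the dataset loader, split, and metric below are assumptions for illustration, not the article's exact code:

    from sklearn.datasets import fetch_california_housing
    from sklearn.ensemble import AdaBoostRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.tree import DecisionTreeRegressor

    # Stand-in dataset; the lesson may use a different housing dataset
    housing = fetch_california_housing()
    train_x, test_x, train_y, test_y = train_test_split(
        housing.data, housing.target, test_size=0.25, random_state=33)

    # Compare a plain decision tree, KNN, and AdaBoost (50 weak learners by default)
    for name, model in [('DecisionTree', DecisionTreeRegressor()),
                        ('KNN', KNeighborsRegressor()),
                        ('AdaBoost', AdaBoostRegressor())]:
        model.fit(train_x, train_y)
        pred = model.predict(test_x)
        print('%s MSE: %.4f' % (name, mean_squared_error(test_y, pred)))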

    
     2
  • 骑行的掌柜J
    2019-08-14
    Typo in my previous comment; teacher Chen is right 😂: the regression algorithm doesn't have the algorithm parameter that the classification algorithm has.
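
    For reference, a quick look at the two constructors bears this out (the values shown are scikit-learn defaults):

    from sklearn.ensemble import AdaBoostClassifier, AdaBoostRegressor

    # AdaBoostClassifier exposes an `algorithm` parameter (SAMME / SAMME.R);
    # AdaBoostRegressor has no such parameter -- it takes a `loss` instead.
    clf = AdaBoostClassifier(n_estimators=50, algorithm='SAMME.R')
    reg = AdaBoostRegressor(n_estimators=50, loss='linear')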
    
    
  • 滢
    2019-04-21
    My results:
    CART decision tree K-fold cross-validation accuracy: 0.39480897860892333
    AdaBoost K-fold cross-validation accuracy: 0.4376641797318339

    from sklearn.tree import DecisionTreeRegressor
    from sklearn.ensemble import AdaBoostRegressor
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.model_selection import cross_val_predict
    import pandas as pd
    import numpy as np

    # Load the data
    path = '/Users/apple/Desktop/GitHubProject/Read mark/数据分析/geekTime/data/'
    train_data = pd.read_csv(path + 'Titannic_Data_train.csv')
    test_data = pd.read_csv(path + 'Titannic_Data_test.csv')

    # Clean the data
    train_data['Age'].fillna(train_data['Age'].mean(), inplace=True)
    test_data['Age'].fillna(test_data['Age'].mean(), inplace=True)
    train_data['Embarked'].fillna('S', inplace=True)
    test_data['Embarked'].fillna('S', inplace=True)

    # Feature selection
    features = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Embarked']
    train_features = train_data[features]
    train_result = train_data['Survived']
    test_features = test_data[features]
    devc = DictVectorizer(sparse=False)
    train_features = devc.fit_transform(train_features.to_dict(orient='records'))
    test_features = devc.transform(test_features.to_dict(orient='records'))

    # Build a decision tree and predict
    tree_regressor = DecisionTreeRegressor()
    tree_regressor.fit(train_features, train_result)
    predict_tree = tree_regressor.predict(test_features)
    # Cross-validation accuracy
    print('CART decision tree K-fold cross-validation accuracy:', np.mean(cross_val_predict(tree_regressor, train_features, train_result, cv=10)))

    # Build AdaBoost
    ada_regressor = AdaBoostRegressor()
    ada_regressor.fit(train_features, train_result)
    predict_ada = ada_regressor.predict(test_features)
    # Cross-validation accuracy
    print('AdaBoost K-fold cross-validation accuracy:', np.mean(cross_val_predict(ada_regressor, train_features, train_result, cv=10)))

    Editor's reply: Accuracy is normally not this low, so check your code for errors.
    Note that you should use DecisionTreeClassifier and AdaBoostClassifier here, because Titanic survival prediction is a classification problem (discrete values), not a regression problem (continuous values).
    Also, for K-fold cross-validation you should use cross_val_score:
    cross_val_score returns the evaluation accuracy
    cross_val_predict returns the predicted class labels
    Adjust these two places and run the code again.
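
    A minimal sketch of the two fixes the editor describes, assuming train_features and train_result were built as in the comment above:

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Classification, not regression: the survival labels are discrete (0/1)
    tree_clf = DecisionTreeClassifier()
    ada_clf = AdaBoostClassifier()

    # cross_val_score returns one accuracy per fold; average over the 10 folds
    print('CART K-fold CV accuracy: %.4f' % np.mean(cross_val_score(tree_clf, train_features, train_result, cv=10)))
    print('AdaBoost K-fold CV accuracy: %.4f' % np.mean(cross_val_score(ada_clf, train_features, train_result, cv=10)))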

    
    
  • 滨滨
    2019-04-21
    Both classification and regression make predictions; classification predicts discrete values while regression predicts continuous values.

    Author's reply: Correct.

    
    
  • hlz-123
    2019-03-27
    Teacher, in the example comparing AdaBoost with the decision tree model, the weak classifier is
    dt_stump = DecisionTreeClassifier(max_depth=1, min_samples_leaf=1)
    Why are both parameters set to 1, making it effectively a single root node with two leaf nodes?
    And why does the plain decision tree classifier set no parameters at all?
    
    
  • 叮当猫
    2019-03-19
    When is the unified fit_transform preprocessing actually needed?
    Without fit_transform on either dataset, the accuracies are:
    Decision stump (weak classifier): 0.7867
    Decision tree classifier: 0.7734
    AdaBoost classifier: 0.8161
    With fit_transform applied to both datasets, the accuracies are:
    Decision stump (weak classifier): 0.7867
    Decision tree classifier: 0.7745
    AdaBoost classifier: 0.8138

    The first case:
    train_data['Embarked'] = train_data['Embarked'].map({'S':0, 'C':1, 'Q':2})
    test_data['Embarked'] = test_data['Embarked'].map({'S':0, 'C':1, 'Q':2})
    train_data['Sex'] = train_data['Sex'].map({'male':0, 'female':1})
    test_data['Sex'] = test_data['Sex'].map({'male':0, 'female':1})

    train_data['Age'].fillna(train_data['Age'].mean(), inplace=True)
    test_data['Age'].fillna(test_data['Age'].mean(), inplace=True)
    train_data['Fare'].fillna(train_data['Fare'].mean(), inplace=True)
    test_data['Fare'].fillna(test_data['Fare'].mean(), inplace=True)

    features = ['Pclass', 'Sex','Age','SibSp', 'Parch', 'Fare', 'Embarked']
    train_features = train_data[features]
    train_labels = train_data['Survived']
    test_features = test_data[features]

    #train_features = dvec.fit_transform(train_features.to_dict(orient='records'))
    #test_features = dvec.transform(test_features.to_dict(orient='records'))

    The second case:
    #train_data['Embarked'] = train_data['Embarked'].map({'S':0, 'C':1, 'Q':2})
    #test_data['Embarked'] = test_data['Embarked'].map({'S':0, 'C':1, 'Q':2})
    #train_data['Sex'] = train_data['Sex'].map({'male':0, 'female':1})
    #test_data['Sex'] = test_data['Sex'].map({'male':0, 'female':1})

    train_data['Age'].fillna(train_data['Age'].mean(), inplace=True)
    test_data['Age'].fillna(test_data['Age'].mean(), inplace=True)
    train_data['Fare'].fillna(train_data['Fare'].mean(), inplace=True)
    test_data['Fare'].fillna(test_data['Fare'].mean(), inplace=True)

    features = ['Pclass', 'Sex','Age','SibSp', 'Parch', 'Fare', 'Embarked']
    train_features = train_data[features]
    train_labels = train_data['Survived']
    test_features = test_data[features]

    dvec = DictVectorizer(sparse=False)
    train_features = dvec.fit_transform(train_features.to_dict(orient='records'))
    test_features = dvec.transform(test_features.to_dict(orient='records'))
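
    One way to see what fit_transform is doing here: DictVectorizer one-hot encodes string-valued columns and passes numeric columns through unchanged, so it only matters when strings are left in the data. A minimal sketch with made-up rows:

    from sklearn.feature_extraction import DictVectorizer

    rows = [{'Sex': 'male', 'Age': 22.0, 'Embarked': 'S'},
            {'Sex': 'female', 'Age': 38.0, 'Embarked': 'C'}]
    dvec = DictVectorizer(sparse=False)
    matrix = dvec.fit_transform(rows)

    # String columns become one 0/1 column per distinct value; Age stays numeric
    print(dvec.feature_names_)
    # ['Age', 'Embarked=C', 'Embarked=S', 'Sex=female', 'Sex=male']
    print(matrix)

    In the first case above, the manual map() calls already turn the strings into numbers, which is why the models can consume the DataFrame directly without DictVectorizer.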
    
    
  • JingZ
    2019-03-05
    # AdaBoost
    At first I instinctively used AdaBoostRegressor and got an accuracy of 0.33; after reading other people's code I fixed it right away.

    The algorithm code isn't complicated; the key is being able to write it from scratch yourself, which takes more hands-on practice.

    from sklearn.ensemble import AdaBoostClassifier

    # Use the AdaBoost classification model
    ada = AdaBoostClassifier()
    ada.fit(train_features, train_labels)

    pred_labels = ada.predict(test_features)

    acc_ada_classifier = round(ada.score(train_features, train_labels), 6)
    print('AdaBoost score accuracy: %.4f' % acc_ada_classifier)
    print('AdaBoost cross_val_score accuracy: %.4f' % np.mean(cross_val_score(ada, train_features, train_labels, cv=10)))

    Output:
    AdaBoost score accuracy: 0.8339
    AdaBoost cross_val_score accuracy: 0.8104

    Author's reply: Good job.

    
    
  • FORWARD―MOUNT
    2019-03-05
    Teacher, in the house-price prediction example, where does the number of 50 weak classifiers come from?

    Author's reply: AdaBoost determines that itself.
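
    One likely explanation, assuming the lesson used scikit-learn's AdaBoostRegressor with default settings: n_estimators defaults to 50, so up to 50 weak learners are fitted unless you override it.

    from sklearn.ensemble import AdaBoostRegressor

    # n_estimators defaults to 50; boosting may also stop early
    # if a perfect fit is reached before all 50 are trained.
    ada = AdaBoostRegressor()                      # up to 50 weak learners
    ada_big = AdaBoostRegressor(n_estimators=100)  # explicit override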

    
    
  • 佳佳的爸
    2019-03-04
    Hello teacher, where can I download the complete source code? I mean the source code inside each lesson.

    Author's reply: https://github.com/cystanford

    
    