# Algorithm Summary

| Classification | Regression |
| --- | --- |
| Decision tree: `tree.DecisionTreeClassifier` | Regression tree: `tree.DecisionTreeRegressor` |
| Random forest: `ensemble.RandomForestClassifier` | Random forest: `ensemble.RandomForestRegressor` |
| Logistic regression: `linear_model.LogisticRegression` | Gradient boosting: `ensemble.GradientBoostingRegressor` |
| KNN: `neighbors.KNeighborsClassifier` | |
| Naive Bayes | |
| SVC: `svm.SVC` | |
```python
from sklearn.model_selection import train_test_split

Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, Y, test_size=0.3, random_state=0)
```

Do not apply scaling to categorical variables; standardize only the continuous columns:

```python
# Keep only the continuous (non-object) columns for standardization
col = X.columns.tolist()
cate = X.columns[X.dtypes == "object"].tolist()
for i in cate:
    col.remove(i)

from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
ss = ss.fit(X.loc[:, col])
X.loc[:, col] = ss.transform(X.loc[:, col])
```
```python
# Quick overview of every column, including tail percentiles
df.describe([0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.99]).T
```
- Plot the feature distributions of the training set against those of the prediction set and check how similar they are; if they differ, plan for overfitting control and cross-validation later (see the sketch below).
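A minimal sketch of that check, assuming `train` and `test` are DataFrames that share a numeric column such as `fnlwgt` (names are illustrative):

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Overlay the train/test densities of one feature to eyeball distribution shift
fig, ax = plt.subplots(figsize=(8, 4))
sns.kdeplot(train['fnlwgt'], label='train', ax=ax)
sns.kdeplot(test['fnlwgt'], label='test', ax=ax)
ax.legend()
plt.show()
```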

## Data preprocessing

```python
# Inspect a continuous variable with a distribution plot
import seaborn as sns
sns.distplot(dataset_raw['fnlwgt'])
```

## Classification

### Decision tree

```python
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, GridSearchCV
import numpy as np
import matplotlib.pyplot as plt

# Instantiate a classifier
clf = tree.DecisionTreeClassifier()
# Train
clf = clf.fit(Xtrain, Ytrain)
# Evaluate
score = clf.score(Xtest, Ytest)

# Learning curve over max_depth
n = np.linspace(5, 16, 10)
score = []
for i in n:
    clf = DecisionTreeClassifier(max_depth=int(i))
    score.append(cross_val_score(clf, X, y, cv=10).mean())
print(max(score), int(n[score.index(max(score))]))
plt.plot(n, score)
plt.show()


# Grid search
gini_thresholds = np.linspace(0, 0.5, 20)
parameters = {
    "criterion": ('gini', 'entropy')
    , "splitter": ('best', 'random')
    , 'max_depth': [*range(1, 11)]
    , 'min_samples_leaf': [*range(1, 50, 5)]
    , 'min_impurity_decrease': [*gini_thresholds]
}

clf = DecisionTreeClassifier(random_state=25)
GS = GridSearchCV(clf, parameters, cv=10)
GS.fit(Xtrain, Ytrain)
```

### Random forest

```python
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor
import numpy as np
import pandas as pd

# Impute missing values column by column, starting from the column with the fewest NaNs
X_missing_reg = X_missing.copy()
sortindex = np.argsort(X_missing_reg.isnull().sum(axis=0)).values
for i in sortindex:

    # Build the new feature matrix and the new label
    df = X_missing_reg
    fillc = df.iloc[:, i]
    df = pd.concat([df.iloc[:, df.columns != i], pd.DataFrame(y_full)], axis=1)

    # In the new feature matrix, fill the remaining missing values with 0
    df_0 = SimpleImputer(missing_values=np.nan,
                         strategy='constant', fill_value=0).fit_transform(df)

    # Training set: rows where the target column is known; prediction set: rows where it is missing
    Ytrain = fillc[fillc.notnull()]
    Ytest = fillc[fillc.isnull()]
    Xtrain = df_0[Ytrain.index, :]
    Xtest = df_0[Ytest.index, :]

    # Use random forest regression to impute the missing values
    rfc = RandomForestRegressor(n_estimators=100)
    rfc = rfc.fit(Xtrain, Ytrain)
    Ypredict = rfc.predict(Xtest)

    # Write the imputed values back into the original feature matrix
    X_missing_reg.loc[X_missing_reg.iloc[:, i].isnull(), X_missing_reg.columns[i]] = Ypredict
```

### Gradient boosting trees

```python
from sklearn.ensemble import GradientBoostingClassifier

gbc = GradientBoostingClassifier().fit(Xtrain, Ytrain)
gbc.score(Xtest, Ytest)

# Learning curve over n_estimators
n = np.linspace(329, 339, 10)
score = []
for i in n:
    gbc = GradientBoostingClassifier(n_estimators=int(i)).fit(Xtrain, Ytrain)
    score.append(gbc.score(Xtest, Ytest))
print(max(score), int(n[score.index(max(score))]))
plt.plot(n, score)
plt.show()
```

### SVC (SVMs require standardization; relatively good at finding the minority class)

```python
import numpy as np
import matplotlib.pyplot as plt
import datetime
from time import time
from sklearn import svm
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

X = StandardScaler().fit_transform(X)  # standardize to zero mean, unit variance

Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, y, test_size=0.3, random_state=420)

Kernel = ["linear", "poly", "rbf", "sigmoid"]

# Compare the four kernels
for kernel in Kernel:
    clf = SVC(kernel=kernel
              , gamma="auto"
              , degree=1
              , cache_size=5000
              ).fit(Xtrain, Ytrain)
    print("The accuracy under kernel %s is %f" % (kernel, clf.score(Xtest, Ytest)))

# Tune the rbf kernel: gamma
score = []
gamma_range = np.logspace(-10, 1, 50)  # numbers evenly spaced on a log scale
for i in gamma_range:
    clf = SVC(kernel="rbf", gamma=i, cache_size=5000).fit(Xtrain, Ytrain)
    score.append(clf.score(Xtest, Ytest))

print(max(score), gamma_range[score.index(max(score))])
plt.plot(gamma_range, score)
plt.show()

# Grid search for the poly kernel
from sklearn.model_selection import StratifiedShuffleSplit  # CV splitter for the grid search
from sklearn.model_selection import GridSearchCV            # grid search with cross-validation

time0 = time()

gamma_range = np.logspace(-10, 1, 20)
coef0_range = np.linspace(0, 5, 10)

param_grid = dict(gamma=gamma_range
                  , coef0=coef0_range)
cv = StratifiedShuffleSplit(n_splits=5, test_size=0.3, random_state=420)  # 5 splits, each with a 30% test set
grid = GridSearchCV(SVC(kernel="poly", degree=1, cache_size=5000)
                    , param_grid=param_grid
                    , cv=cv)
grid.fit(X, y)

print("The best parameters are %s with a score of %0.5f" % (grid.best_params_,
                                                            grid.best_score_))
print(time() - time0)


# Tune the linear kernel: C
score = []
C_range = np.linspace(0.01, 30, 50)
for i in C_range:
    clf = SVC(kernel="linear", C=i, cache_size=5000).fit(Xtrain, Ytrain)
    score.append(clf.score(Xtest, Ytest))
print(max(score), C_range[score.index(max(score))])
plt.plot(C_range, score)
plt.show()

# Switch to rbf
score = []
C_range = np.linspace(0.01, 30, 50)
for i in C_range:
    clf = SVC(kernel="rbf", C=i, gamma=0.012742749857031322, cache_size=5000).fit(Xtrain, Ytrain)
    score.append(clf.score(Xtest, Ytest))

print(max(score), C_range[score.index(max(score))])
plt.plot(C_range, score)
plt.show()

# Zoom in further
score = []
C_range = np.linspace(5, 7, 50)
for i in C_range:
    clf = SVC(kernel="rbf", C=i,
              gamma=0.012742749857031322, cache_size=5000).fit(Xtrain, Ytrain)
    score.append(clf.score(Xtest, Ytest))

print(max(score), C_range[score.index(max(score))])
plt.plot(C_range, score)
plt.show()


# Handle class imbalance with class_weight (rarely used; it tends to hurt the majority class badly)
# Set class_weight explicitly
wclf = svm.SVC(kernel='linear', class_weight={1: 10})
wclf.fit(X, y)


# Recall
from sklearn.metrics import roc_auc_score, recall_score
result = clf.predict(Xtest)
recall = recall_score(Ytest, result)
# Raise recall with class weights
class_weight = {1: 10}  # class label: weight
# Or balance automatically
class_weight = 'balanced'


# Tune the weight ratio
irange = np.linspace(0.01, 0.05, 10)
for i in irange:
    times = time()
    clf = SVC(kernel="linear"
              , gamma="auto"
              , cache_size=5000
              , class_weight={1: 1 + i}
              ).fit(Xtrain, Ytrain)
    result = clf.predict(Xtest)
    score = clf.score(Xtest, Ytest)
    recall = recall_score(Ytest, result)
    auc = roc_auc_score(Ytest, clf.decision_function(Xtest))
    print("under ratio 1:%f testing accuracy %f, recall is %f, auc is %f" % (1 + i, score, recall, auc))
    print(datetime.datetime.fromtimestamp(time() - times).strftime("%M:%S:%f"))
```

### Classification OneKey

```python
%%time
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression, RidgeClassifier, SGDClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.svm import SVC, LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import cross_val_score

def get_models(models={}):
    models['LogisticRegression'] = LogisticRegression()
    models['RidgeClassifier'] = RidgeClassifier()
    models['SGDClassifier'] = SGDClassifier()
    models['RandomForestClassifier'] = RandomForestClassifier()
    models['GradientBoostingClassifier'] = GradientBoostingClassifier()
    models['AdaBoostClassifier'] = AdaBoostClassifier()
    models['KNN'] = KNeighborsClassifier()
    models['GaussianNB'] = GaussianNB()
    models['SVC'] = SVC()
    models['LinearSVC'] = LinearSVC()
    return models

def make_pipe(model):
    # Scale first, then fit the model, inside one pipeline
    s = [('MinMaxScaler', MinMaxScaler()), ('model', model)]
    return Pipeline(steps=s)

def pred(model, x):
    return model.predict(x)

def fit(model, x, y):
    return model.fit(x, y)

def valid(model, x, y, cv=10):
    return cross_val_score(model, x, y, cv=cv)

# Cross-validate every model and rank them by mean score
s = {}
for n, m in get_models().items():
    pipe = make_pipe(m)
    s[n] = valid(pipe, x, y).mean()

order = sorted(s.items(), key=lambda x: x[1], reverse=True)
order


plot_data = pd.DataFrame(data=order)
plot_data.columns = ['algo', 'score']
plot_data.set_index('algo').plot.bar(figsize=(10, 6))
plt.xticks(rotation=360)
f'%s is the best,score is %f' % (order[0][0], order[0][1])
```
```python
# Variant: cross-validate a standardize -> normalize -> model pipeline (reuses get_models from above)
from sklearn.preprocessing import StandardScaler, MinMaxScaler

def make_pipeline(model, X, Y):
    steps = list()
    # standardization
    steps.append(('standardize', StandardScaler()))
    # normalization
    steps.append(('normalize', MinMaxScaler()))
    # the model
    steps.append(('model', model))
    # create pipeline
    pipeline = Pipeline(steps)
    return cross_val_score(pipeline, X, Y, cv=4).mean()


dic = {}
models = get_models()
for name, model in models.items():
    dic[name] = make_pipeline(model, X, Y)
    print('model %s is done,score is %f' % (name, dic[name]))
order = sorted(dic.items(), key=lambda x: x[1], reverse=True)
```

### Classification tuning OneKey
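A minimal sketch of what a one-key tuning loop can look like, reusing the pipeline idea above; the models and parameter grids below are illustrative, and `Xtrain`/`Ytrain` are assumed to exist:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Each entry: estimator plus an illustrative parameter grid
# (the 'model__' prefix targets the 'model' step inside the pipeline)
candidates = {
    'LogisticRegression': (LogisticRegression(max_iter=1000),
                           {'model__C': [0.1, 1, 10]}),
    'RandomForestClassifier': (RandomForestClassifier(),
                               {'model__n_estimators': [100, 300],
                                'model__max_depth': [5, 10, None]}),
    'GradientBoostingClassifier': (GradientBoostingClassifier(),
                                   {'model__n_estimators': [100, 300],
                                    'model__learning_rate': [0.05, 0.1]}),
}

results = {}
for name, (model, grid) in candidates.items():
    pipe = Pipeline([('MinMaxScaler', MinMaxScaler()), ('model', model)])
    gs = GridSearchCV(pipe, grid, cv=5)
    gs.fit(Xtrain, Ytrain)
    results[name] = (gs.best_score_, gs.best_params_)
    print('model %s is done, best score is %f' % (name, gs.best_score_))

order = sorted(results.items(), key=lambda kv: kv[1][0], reverse=True)
```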




## Regression

### Regression OneKey

```python
%%time
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso
from sklearn.linear_model import Ridge
from sklearn.linear_model import ElasticNet
from sklearn.linear_model import HuberRegressor
from sklearn.linear_model import Lars
from sklearn.linear_model import LassoLars
from sklearn.linear_model import PassiveAggressiveRegressor
from sklearn.linear_model import RANSACRegressor
from sklearn.linear_model import SGDRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# prepare a list of ml models
def get_models(models=dict()):
    # linear and ensemble models
    models['rfr'] = RandomForestRegressor()
    models['gbr'] = GradientBoostingRegressor()
    models['lr'] = LinearRegression()
    models['lasso'] = Lasso()
    models['ridge'] = Ridge()
    models['en'] = ElasticNet()
    models['huber'] = HuberRegressor()
    models['lars'] = Lars()
    models['llars'] = LassoLars()
    models['pa'] = PassiveAggressiveRegressor(max_iter=1000, tol=1e-3)
    models['ransac'] = RANSACRegressor()
    models['sgd'] = SGDRegressor(max_iter=1000, tol=1e-3)
    print('Defined %d models' % len(models))
    return models

# create a feature preparation pipeline for a model
def make_pipeline(model):
    steps = list()
    # standardization
    steps.append(('standardize', StandardScaler()))
    # normalization
    steps.append(('normalize', MinMaxScaler()))
    # the model
    steps.append(('model', model))
    # create pipeline
    pipeline = Pipeline(steps)
    return pipeline

def cross_validation(pipeline, X, Y):
    return cross_val_score(pipeline, X, Y, cv=10).mean()

def validation(pipeline, Xtrain, Ytrain, Xtest, Ytest):
    pipeline.fit(Xtrain, Ytrain)
    return pipeline.score(Xtest, Ytest)

dic = {}
models = get_models()
for name, model in models.items():
    pipeline = make_pipeline(model)
    # dic[name] = cross_validation(pipeline, Xtrain, Ytrain)
    dic[name] = validation(pipeline, Xtrain, Ytrain, Xtest, Ytest)
    print('model %s is done,score is %f' % (name, dic[name]))
order = sorted(dic.items(), key=lambda x: x[1], reverse=True)

plot_data = pd.DataFrame(data=order)
plot_data.columns = ['algo', 'score']
plot_data.set_index('algo').plot.bar(figsize=(10, 6))
plt.xticks(rotation=360)
f'%s is the best,score is %f' % (order[0][0], order[0][1])
```

```python
# Variant: build the pipeline, fit on the training split and score on the test split directly
def make_pipeline(model, Xtrain, Ytrain, Xtest, Ytest):
    steps = list()
    # standardization
    steps.append(('standardize', StandardScaler()))
    # normalization
    steps.append(('normalize', MinMaxScaler()))
    # the model
    steps.append(('model', model))
    # create pipeline and fit it on the training set
    pipeline = Pipeline(steps).fit(Xtrain, Ytrain)
    return pipeline.score(Xtest, Ytest)

dic = {}
models = get_models()
for name, model in models.items():
    dic[name] = make_pipeline(model, Xtrain, Ytrain, Xtest, Ytest)
    print('model %s is done,score is %f' % (name, dic[name]))
order = sorted(dic.items(), key=lambda x: x[1], reverse=True)
```

## Template

```python
import pandas as pd
import numpy as np

# Overview: tail percentiles for numeric columns, summary for object columns
df.describe([0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.99]).T
df.describe(include=['O'])

# Separate continuous and categorical columns
col = df.columns.tolist()
cate = df.columns[df.dtypes == "object"].tolist()
for i in cate:
    col.remove(i)

# Type conversion: strip '$' and ')' then cast to float
df['a'] = df['a'].replace(r'[\$,)]', '', regex=True).astype(float)

# Missing values: usually mode for categorical columns, mean for continuous ones
df['b'] = df['b'].fillna(df.b.mean())
df['c'] = df['c'].fillna(df.c.mode()[0])

# Dummy variables
cate = df.columns[df.dtypes == "object"].tolist()
cate
for c in cate:
    df = pd.concat([df, pd.get_dummies(df[c], prefix=str(c))], axis=1)
    df.drop(c, axis=1, inplace=True)

# Feature engineering on a single column
def getTitle(name):
    str1 = name.split(',')[1]   # take "title. surname"
    str2 = str1.split('.')[0]   # take the title
    # strip() removes leading/trailing whitespace
    str3 = str2.strip()
    return str3

titleDf = pd.DataFrame()
# map() applies the function to every element of the Series
titleDf['Title'] = full['Name'].map(getTitle)

# Build new one-hot style features from other columns
df['month'] = df['Date'].dt.month
familyDf = pd.DataFrame()
familyDf['FamilySize'] = full['Parch'] + full['SibSp'] + 1
familyDf['Family_Single'] = familyDf['FamilySize'].map(lambda s: 1 if s == 1 else 0)
familyDf['Family_Small'] = familyDf['FamilySize'].map(lambda s: 1 if 2 <= s <= 4 else 0)
familyDf['Family_Large'] = familyDf['FamilySize'].map(lambda s: 1 if 5 <= s else 0)

# Correlations with the target
corrDf = full.corr()
survived_rel = corrDf['Survived'].sort_values(ascending=False)
```

## Titanic

```python
import pandas as pd
import numpy as np

train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')

train.head()

train.info()

train.describe([0.01, 0.1, 0.25, 0.5, 0.75, 0.9, 0.99]).T

train.isna().sum()

# Fill missing values: mean for Age, mode for Embarked
train.Age.fillna(train.Age.mean(), inplace=True)
train['Embarked'].fillna(train.Embarked.mode()[0], inplace=True)

train.Cabin.value_counts()

train.Cabin.fillna('Unknown', inplace=True)

train.set_index('PassengerId', inplace=True)

# Combine SibSp and Parch into one FamilySize feature
train['FamilySize'] = train['SibSp'] + train['Parch'] + 1
train.drop(['SibSp', 'Parch'], axis=1, inplace=True)

# Keep only the deck letter of the cabin
train.Cabin = train.Cabin.map(lambda x: x[0])
train.Cabin.value_counts()

# Extract the title from the name
train['Name'].map(lambda x: x.split(',')[1].split('.')[0]).value_counts()
train['Name'] = train['Name'].map(lambda x: x.split(',')[1].split('.')[0])

train.drop('Ticket', axis=1, inplace=True)

# One-hot encode the remaining object columns
col = train.columns[train.dtypes == "object"].tolist()
col

for c in col:
    train = pd.concat([train, pd.get_dummies(train[c], prefix=str(c))], axis=1)
    train.drop(c, inplace=True, axis=1)

corrDf = train.corr()
survived_rel = corrDf['Survived'].sort_values(ascending=False)

Ytrain = train['Survived']
Xtrain = train.drop('Survived', axis=1)

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
import matplotlib.pyplot as plt

# prepare a list of ml models
def get_models(models=dict()):
    models['gbc'] = GradientBoostingClassifier()
    models['rfc'] = RandomForestClassifier()
    models['lr'] = LogisticRegression()
    models['KNN'] = KNeighborsClassifier()
    models['GaussianNB'] = GaussianNB()
    models['sgd'] = SGDClassifier()
    models['lsvc'] = LinearSVC()
    models['svc'] = SVC()
    print('Defined %d models' % len(models))
    return models

# create a feature preparation pipeline for a model and cross-validate it
def make_pipeline(model, X, Y):
    steps = list()
    # standardization
    steps.append(('standardize', StandardScaler()))
    # normalization
    steps.append(('normalize', MinMaxScaler()))
    # the model
    steps.append(('model', model))
    # create pipeline
    pipeline = Pipeline(steps)
    return cross_val_score(pipeline, X, Y, cv=4).mean()

dic = {}
models = get_models()
for name, model in models.items():
    dic[name] = make_pipeline(model, Xtrain, Ytrain)
    print('model %s is done,score is %f' % (name, dic[name]))
order = sorted(dic.items(), key=lambda x: x[1], reverse=True)


plot_data = pd.DataFrame(data=order)
plot_data.columns = ['algo', 'score']
plot_data.set_index('algo').plot.bar(figsize=(10, 6))
plt.xticks(rotation=360)
f'%s is the best,score is %f' % (order[0][0], order[0][1])
```

## Tuning

Ridge regression can cope with exact collinearity between features, which makes plain least squares unusable; Lasso cannot.

Because Lasso is very sensitive to changes in the regularization coefficient, we usually let alpha vary only within a very small range.
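A small illustration of the collinearity point on synthetic data, where one feature is an exact copy of another (an assumed setup, not from the original notes):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.RandomState(0)
x = rng.rand(100, 1)
X = np.hstack([x, x])                   # two exactly collinear features
y = 3 * x.ravel() + 0.1 * rng.rand(100)

# With exact collinearity the least-squares problem has no unique solution;
# Ridge's L2 penalty makes it well-posed again, so its coefficients are stable.
print(LinearRegression().fit(X, y).coef_)
print(Ridge(alpha=1.0).fit(X, y).coef_)
```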

### Ridge

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import RidgeCV
from sklearn.datasets import fetch_california_housing as fch  # fch as used below

housevalue = fch()
X = pd.DataFrame(housevalue.data)
y = housevalue.target
X.columns = ["median income", "median house age", "average rooms"
             , "average bedrooms", "block population", "average occupancy"
             , "block latitude", "block longitude"]

Ridge_ = RidgeCV(alphas=np.arange(1, 1001, 100)
                 # , scoring="neg_mean_squared_error"
                 , store_cv_values=True
                 # , cv=5
                 ).fit(X, y)

Ridge_.score(X, y)
# Cross-validation results: one entry per sample per alpha (leave-one-out)
Ridge_.cv_values_.shape
# Average over samples to see the CV result for each regularization coefficient
Ridge_.cv_values_.mean(axis=0)
# The selected best regularization coefficient
Ridge_.alpha_
```

### Lasso

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

# Build the alpha search range ourselves
alpharange = np.logspace(-10, -2, 200, base=10)  # 200 points from 10**-10 to 10**-2
alpharange.shape
Xtrain.head()
lasso_ = LassoCV(alphas=alpharange  # user-supplied alpha range
                 , cv=5             # number of cross-validation folds
                 ).fit(Xtrain, Ytrain)
# The selected best regularization coefficient
lasso_.alpha_
# All cross-validation results
lasso_.mse_path_
lasso_.mse_path_.shape          # 5-fold CV result for every alpha
lasso_.mse_path_.mean(axis=1)   # note that in ridge regression the axis was 0
# In RidgeCV we used leave-one-out validation, so cv_values_ holds one entry per sample
# per alpha; averaging per alpha means axis=0 (across rows).
# Here mse_path_ holds one entry per fold per alpha,
# so averaging per alpha means axis=1 (across columns).
# Coefficients of the model fitted with the best alpha
lasso_.coef_
lasso_.score(Xtest, Ytest)  # how does it compare with plain linear regression?
reg = LinearRegression().fit(Xtrain, Ytrain)
reg.score(Xtest, Ytest)

# Let LassoCV build the alpha range automatically from the regularization path
# length (eps) and the number of alphas (n_alphas)
ls_ = LassoCV(eps=0.00001
              , n_alphas=300
              , cv=5
              ).fit(Xtrain, Ytrain)
ls_.alpha_
ls_.alphas_        # all automatically generated alpha values
ls_.alphas_.shape
ls_.score(Xtest, Ytest)
ls_.coef_
```

## Multi-step time series

```python
# imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
import seaborn as sns
pd.set_option('display.max_columns', None)  # show all columns

# Data preparation
df = pd.read_csv('data.csv')
df['Date'] = pd.to_datetime(df['Date'])
df['month'] = df['Date'].dt.month
df['dayofweek'] = df['Date'].dt.dayofweek
df[['holiday', 'month', 'dayofweek']] = df[['holiday', 'month', 'dayofweek']].astype('object')
df = pd.get_dummies(df)
df.drop('Season', inplace=True, axis=1)
df.info()
df.set_index('Date', inplace=True)
data = df.copy()

# Helper functions
def make_lag(df, l_beg=1, l_end=32):
    # Add lag features lag_1 ... lag_31 of the target column
    for i in range(l_beg, l_end):
        df['lag_{}'.format(i)] = df.DailyElectricity.shift(i)
    return df

def split_data(n_days, df, indx):
    # Hold out the last n_days as the validation horizon
    Y_val = df.iloc[-n_days:, indx].copy()
    df.iloc[-n_days:, indx] = 0
    make_lag(df)
    X_val = df.iloc[-n_days:].drop('DailyElectricity', axis=1)
    df.dropna(inplace=True)
    X = df[:-n_days].drop('DailyElectricity', axis=1)
    Y = df[:-n_days]['DailyElectricity']
    return X, Y, X_val, Y_val

df = data.copy()
X, Y, X_val, Y_val = split_data(365, df, 0)

from sklearn.pipeline import Pipeline
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression, Lasso, LassoLars, SGDRegressor, Ridge
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def get_models(models={}):
    models['AdaBoostRegressor'] = AdaBoostRegressor()
    models['RandomForestRegressor'] = RandomForestRegressor()
    models['GradientBoostingRegressor'] = GradientBoostingRegressor()
    models['LinearRegression'] = LinearRegression()
    models['Lasso'] = Lasso()
    models['LassoLars'] = LassoLars()
    models['SGDRegressor'] = SGDRegressor()
    models['Ridge'] = Ridge()
    return models

def make_pipe(model):
    steps = [('s', StandardScaler()), ('m', model)]
    return Pipeline(steps)

def val(model, x, y, cv=10):
    return cross_val_score(model, x, y, cv=cv).mean()

# Rank the candidate models by cross-validated score
s = {}
for n, m in get_models().items():
    p = make_pipe(m)
    s[n] = val(p, X, Y)
order = sorted(s.items(), key=lambda x: x[1], reverse=True)
order

pipe = make_pipe(GradientBoostingRegressor())
best_model = pipe.fit(X, Y)

def pred_date(X_val):
    # Predict day by day, feeding each prediction back into the lag features of the next day
    res = []
    for i in range(X_val.shape[0]):
        pred = best_model.predict(np.array(X_val.iloc[i]).reshape(1, -1))
        res.append(pred)
        if i + 1 < X_val.shape[0]:
            X_val.iloc[i + 1, -30:] = X_val.iloc[i, -31:-1]
            X_val.iloc[i + 1, -31] = pred
    return X_val, res

X_val, res = pred_date(X_val)
from sklearn.metrics import r2_score, mean_squared_error
mean_squared_error(Y_val, res)
r2_score(Y_val, res)
```

## Pitfalls

- About data preprocessing
```python
from sklearn.preprocessing import StandardScaler, MinMaxScaler

'''
When preprocessing, never do
    scaler.fit_transform(Xtest)
or
    scaler.fit(Xtest)
    scaler.transform(Xtest)
because that leaks information from the test set.
'''
scaler = StandardScaler()        # instantiate
scaler.fit(Xtrain)               # fit on the training set only
X_train = scaler.transform(Xtrain)
X_test = scaler.transform(Xtest)
norm = MinMaxScaler()
norm.fit(Xtrain)
X_train = norm.transform(Xtrain)
X_test = norm.transform(Xtest)

# Equivalent, using fit_transform on the training set
scaler = StandardScaler()
X_train = scaler.fit_transform(Xtrain)
X_test = scaler.transform(Xtest)
norm = MinMaxScaler()
X_train = norm.fit_transform(Xtrain)
X_test = norm.transform(Xtest)
```
- About prediction

  For the final predictions, refit the model on the entire labeled dataset and only then call `predict` (a minimal sketch follows).
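A minimal sketch of that refit-then-predict step, assuming `X`, `Y` are the full labeled data, `X_submit` holds the rows to score, and `best_model` is the chosen estimator (all names illustrative):

```python
# Refit on all labeled data, then predict the unseen rows
final_model = best_model.fit(X, Y)
predictions = final_model.predict(X_submit)
```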

## Tricks

- When using `get_dummies`, it is best to concatenate the training set and the prediction set first, encode them together, and then split them back apart with `X[:x.shape[0]]`, so both get exactly the same dummy columns (sketch below).
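A minimal sketch of that pattern, assuming `train` and `test` are the raw DataFrames and `Survived` is the label column (illustrative names):

```python
import pandas as pd

# Encode train and test together so both end up with identical dummy columns
full = pd.concat([train.drop('Survived', axis=1), test], axis=0)
full = pd.get_dummies(full)
X = full[:train.shape[0]]          # back to the training rows
X_submit = full[train.shape[0]:]   # rows to predict later
```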

- Converting `pd.factorize` output back to the original values

```python
orig_col = ['b', 'b', 'a', 'c', 'b']
labels, uniques = pd.factorize(orig_col)

# To get the original list back
uniques[labels]
# array(['b', 'b', 'a', 'c', 'b'], dtype=object)
```
- Reordering / selecting columns in pandas

```python
col = ['a', 'c', 'd']
df[col]
```
- Aggregating by month

  - `%A`: day-of-week name
  - `%B`: month name

```python
# Mean price per month name
df.groupby(df['date'].dt.strftime('%B'))['price'].mean()

# Or just extract the month number as a feature
df['month'] = df['Date'].dt.month
```

- Displaying grouped results in a custom order

```python
cat = ['a', 'b', 'c']
# Aggregate first, then reindex the result to the custom order
df.groupby('f').mean().reindex(cat)
```

- Pivoting

```python
# Aggregate, keep 'name' and 'year' as columns, then pivot to wide format
df = df.groupby(['name', 'year'])['gdp'].sum().reset_index()

df.pivot(index='name', columns='year', values='gdp')
```

(Screenshots of the table before and after pivoting omitted.)

Nominal data

String-valued; the values only name a "category", the categories have no inherent relationship to one another, and this level carries the least information. Example: color — red, yellow, blue, green.

Treatment: one-hot encoding.

Ordinal data

String-valued; the values are categories that can also be compared and ordered. Example: quality — excellent, good, fair, poor.

Treatment: map the strings to numeric ranks.

Interval data

Numeric; supports addition and subtraction, used for counts. Example: number of bedrooms.

Treatment: min-max scaling, standardization.

Ratio data

Numeric; supports addition, subtraction, multiplication and division, and has a true zero point. Example: salary — ¥100 is twice ¥50.

Treatment: min-max scaling, normalization, standardization.
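A minimal sketch of those four treatments on a toy DataFrame (column names and the ordinal mapping are illustrative):

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler

df = pd.DataFrame({
    'color':    ['red', 'blue', 'green', 'red'],         # nominal
    'quality':  ['good', 'excellent', 'fair', 'poor'],   # ordinal
    'bedrooms': [2, 3, 1, 4],                            # interval-style count
    'salary':   [50, 100, 80, 120],                      # ratio
})

# Nominal: one-hot encode
df = pd.concat([df, pd.get_dummies(df['color'], prefix='color')], axis=1).drop('color', axis=1)
# Ordinal: map the ordered categories to numeric ranks
df['quality'] = df['quality'].map({'poor': 0, 'fair': 1, 'good': 2, 'excellent': 3})
# Interval / ratio: min-max scale or standardize
df[['bedrooms']] = MinMaxScaler().fit_transform(df[['bedrooms']])
df[['salary']] = StandardScaler().fit_transform(df[['salary']])
```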