This project comes from an Alibaba Cloud Tianchi competition. The dataset can be downloaded from the Tianchi learning competition 【教学赛】金融数据分析赛题2:保险反欺诈预测 (Financial Data Analysis Problem 2: Insurance Anti-Fraud Prediction).
The task is set in an insurance risk-control context. Insurance is an important part of the financial system and plays a major role in social development and livelihood protection. Insurance fraud has become increasingly common in recent years; for some lines of insurance, fraudulent claims already account for 20% or more of total claim payouts, so identifying fraud has become a key application scenario for the industry.
The baseline below uses pandas for data processing, scikit-learn for label encoding, and LightGBM for modelling. Adjust the data paths to match your own directory.
import pandas as pd
# Load the train and test data and concatenate them so preprocessing is applied consistently
train = pd.read_csv('./wen/train.csv')
test = pd.read_csv('./wen/test.csv')
data = pd.concat([train, test], axis=0)
data.index = range(len(data))
## Data exploration
# Missing values per column
data.isnull().sum()
# Number of unique values per column
for col in data.columns:
    print(col, data[col].nunique())
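Fraud-detection data is usually imbalanced, so it is also worth checking the label distribution before modelling. A minimal sketch, assuming (as in the rest of this baseline) that the label column is named fraud and that only the training rows carry a label:
# Class balance of the target; the test rows show up as NaN here after the concat
print(data['fraud'].value_counts(dropna=False))
print(data['fraud'].value_counts(normalize=True))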
# Summarise the categorical (object-dtype) columns and their unique-value counts
cat_columns = data.select_dtypes(include='O').columns
column_name = []
unique_value = []
for col in cat_columns:
    #print(col, data[col].nunique())
    column_name.append(col)
    unique_value.append(data[col].nunique())
df = pd.DataFrame()
df['col_name'] = column_name
df['value'] = unique_value
df = df.sort_values('value', ascending=False)
# Inspect the categorical columns
data[cat_columns]
# Look at individual fields: property_damage and police_report_available contain a '?' category
data['property_damage'].value_counts()
data['property_damage'] = data['property_damage'].map({'NO': 0, 'YES': 1, '?': 2})
data['property_damage'].value_counts()
data['police_report_available'].value_counts()
data['police_report_available'] = data['police_report_available'].map({'NO': 0, 'YES': 1, '?': 2})
data['police_report_available'].value_counts()
# Parse policy_bind_date and incident_date as datetimes
data['policy_bind_date'] = pd.to_datetime(data['policy_bind_date'])
data['incident_date'] = pd.to_datetime(data['incident_date'])
# Check the earliest and latest dates
data['policy_bind_date'].min() # 1990-01-08
data['policy_bind_date'].max() # 2015-02-22
data['incident_date'].min() # 2015-01-01
data['incident_date'].max() # 2015-03-01
base_date = data['policy_bind_date'].min()
# Convert the dates to day offsets (date_diff) from the base date
data['policy_bind_date_diff'] = (data['policy_bind_date'] - base_date).dt.days
data['incident_date_diff'] = (data['incident_date'] - base_date).dt.days
data['incident_date_policy_bind_date_diff'] = data['incident_date_diff'] - data['policy_bind_date_diff']
data[['policy_bind_date', 'incident_date', 'policy_bind_date_diff', 'incident_date_diff', 'incident_date_policy_bind_date_diff']]
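As a quick sanity check of the day-offset logic, here is a tiny standalone sketch with made-up dates (not part of the pipeline); the base date matches the minimum policy_bind_date noted above:
import pandas as pd
base = pd.Timestamp('1990-01-08')
bind = pd.Timestamp('1990-01-10')
incident = pd.Timestamp('2015-02-01')
print((bind - base).days)      # 2     -> policy_bind_date_diff
print((incident - base).days)  # 9155  -> incident_date_diff
print((incident - bind).days)  # 9153  -> incident_date_policy_bind_date_diff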
# Drop the original date columns policy_bind_date and incident_date
data.drop(['policy_bind_date', 'incident_date'], axis=1, inplace=True)
# Drop the id column as well
data.drop(['policy_id'], axis=1, inplace=True)
data.columns
## Label encoding
from sklearn.preprocessing import LabelEncoder
cat_columns = data.select_dtypes(include='O').columns
for col in cat_columns:
    le = LabelEncoder()
    data[col] = le.fit_transform(data[col])
data[cat_columns]
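If you later want to map the encoded integers back to the original category strings (for error analysis, say), a variant that could replace the loop above keeps the fitted encoders around. This is an optional sketch, not part of the original baseline; encoders is just an illustrative name:
from sklearn.preprocessing import LabelEncoder
encoders = {}
for col in cat_columns:
    le = LabelEncoder()
    data[col] = le.fit_transform(data[col])
    encoders[col] = le
# Original categories behind the integer codes of one column (when this loop replaces the one above)
print(encoders[cat_columns[0]].classes_)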
# Split back into the labelled training set and the unlabelled test set
train = data[data['fraud'].notnull()]
test = data[data['fraud'].isnull()]
import lightgbm as lgb
model_lgb = lgb.LGBMClassifier(
    num_leaves=2**5-1, reg_alpha=0.25, reg_lambda=0.25, objective='binary',
    max_depth=-1, learning_rate=0.005, min_child_samples=3, random_state=2022,
    n_estimators=2000, subsample=1, colsample_bytree=1,
)
# Train the model on the labelled rows
model_lgb.fit(train.drop(['fraud'], axis=1), train['fraud'])
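Before writing a submission, it can help to estimate AUC locally with cross-validation. A minimal sketch (an add-on for local validation, not part of the original baseline), assuming the feature set is everything except fraud:
from sklearn.model_selection import StratifiedKFold, cross_val_score
X = train.drop(['fraud'], axis=1)
y = train['fraud']
# 5-fold stratified CV scored with AUC, the competition metric
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=2022)
scores = cross_val_score(model_lgb, X, y, cv=cv, scoring='roc_auc')
print(scores, scores.mean())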
# The metric is AUC: submitting predicted probabilities (predict_proba) scores better than hard class labels
y_pred = model_lgb.predict_proba(test.drop(['fraud'], axis=1))
y_pred
# Fill the sample submission with the predicted fraud probabilities
result = pd.read_csv('./submission.csv')
result['fraud'] = y_pred[:, 1]
result.to_csv('./baseline.csv', index=False)
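To see which features the model relies on, the sklearn wrapper exposes LightGBM's feature importances; a short, purely informational sketch:
# Rank features by LightGBM's importance scores
feat_imp = pd.Series(
    model_lgb.feature_importances_,
    index=train.drop(['fraud'], axis=1).columns,
).sort_values(ascending=False)
print(feat_imp.head(20))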
This baseline only does simple label encoding and date-difference features, with no other complex processing, so there is still plenty of room for improvement: richer feature engineering, digging deeper into the business logic of the data to build effective features, or model ensembling can all raise the score. As one illustration of the ensembling idea, see the sketch below.
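A minimal blending sketch, assuming the preprocessed train/test frames from above; HistGradientBoostingClassifier is used as the second model purely for illustration (it also tolerates missing values natively), and the 0.5/0.5 weights are arbitrary:
from sklearn.ensemble import HistGradientBoostingClassifier
X_train = train.drop(['fraud'], axis=1)
y_train = train['fraud']
X_test = test.drop(['fraud'], axis=1)
# Second model purely for illustration
model_hgb = HistGradientBoostingClassifier(random_state=2022)
model_hgb.fit(X_train, y_train)
# Average the two models' fraud probabilities (weights chosen arbitrarily)
blend = 0.5 * model_lgb.predict_proba(X_test)[:, 1] + 0.5 * model_hgb.predict_proba(X_test)[:, 1]
result['fraud'] = blend
result.to_csv('./baseline_blend.csv', index=False)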