Protein Sequences | Deep-Learning-Based Protein Sequence Family Classification

Posted by DrugAI on 2021/07/15 05:20:20

Background

The protein dataset comes from the Protein Data Bank (PDB) maintained by the Research Collaboratory for Structural Bioinformatics (RCSB).

RCSB :  Research Collaboratory for Structural Bioinformatics

PDB :  Protein Data Bank

The PDB is a repository of atomic coordinates and other information describing proteins and other important biological macromolecules. Structural biologists use methods such as X-ray crystallography, NMR, and cryo-electron microscopy to determine the position of each atom in a molecule relative to the others.

The continually growing PDB reflects research under way in laboratories around the world, which makes using the database in research and education both exciting and challenging. Structures are available for many of the proteins and nucleic acids involved in life processes, so the PDB can be searched for the structures of ribosomes, oncogenes, drug targets, and even whole viruses. Finding the information you need can be a challenge, however, because the PDB holds many different structures: there are often multiple structures for a given molecule, partial structures, or structures that have been modified or inactivated relative to their native form.
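To make this concrete, the fragment below is a minimal sketch of fetching and parsing one PDB entry programmatically. It assumes the third-party biopython package is installed, and the PDB ID 1TUP is chosen purely as an illustration; it is not part of the dataset used in this post.

  from Bio.PDB import PDBList, PDBParser

  # Download one entry from the PDB archive (legacy .pdb format)
  path = PDBList().retrieve_pdb_file('1TUP', pdir='.', file_format='pdb')

  # Parse atomic coordinates and walk the model -> chain -> residue hierarchy
  structure = PDBParser(QUIET=True).get_structure('1TUP', path)
  for chain in structure[0]:
      print(chain.id, len(list(chain.get_residues())))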

Protein sequence family classification

The task is to classify protein families from their amino-acid sequences. The work builds on deep learning models from natural language processing (NLP) and assumes that a protein sequence can be treated as a kind of language.
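To illustrate the analogy before any modeling: a sequence can be read as a "sentence" whose "words" are single residues, or alternatively overlapping k-mers. The snippet below is a hypothetical sketch of both views (the sequence shown is made up):

  # A protein sequence viewed as text (example sequence is made up)
  seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

  # Character-level tokens: one per amino acid (what Tokenizer(char_level=True) does below)
  chars = list(seq)

  # Word-like alternative: overlapping 3-mers
  kmers = [seq[i:i+3] for i in range(len(seq) - 2)]
  print(chars[:5], kmers[:3])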



  
  # Import libraries
  import numpy as np
  import pandas as pd
  from keras.models import Sequential
  from keras.layers import Dense, Conv1D, MaxPooling1D, Flatten, Dropout
  from keras.layers.normalization import BatchNormalization
  from keras.layers.embeddings import Embedding
  from keras.callbacks import EarlyStopping
  from keras.preprocessing import sequence
  from keras.preprocessing.text import Tokenizer
  from sklearn.model_selection import train_test_split
  %matplotlib inline

  
  # Merge the two datasets and drop duplicate sequences
  df = pd.read_csv('pdbdata_no_dups.csv').merge(
      pd.read_csv('pdbdata_seq.csv'), how='inner', on='structureId'
  ).drop_duplicates(["sequence"])  # optionally also "classification"
  # Drop rows whose label or sequence is missing (i.e. not a string)
  df = df[[type(c) == type('') for c in df.classification.values]]
  df = df[[type(c) == type('') for c in df.sequence.values]]
  # Keep only proteins
  df = df[df.macromoleculeType_x == 'Protein']
  df = df.reset_index(drop=True)
  print(df.shape)
  df.head()
(87761, 18)


  
  print(df.residueCount_x.quantile(0.9))
  df.residueCount_x.describe()
count 87761.000000
mean 922.913937
std 3173.118920
min 3.000000
25% 234.000000
50% 451.000000
75% 880.000000
max 157478.000000
Name: residueCount_x, dtype: float64

  
  # Filter out very long sequences
  df = df.loc[df.residueCount_x < 1200]
  print(df.shape[0])
  df.residueCount_x.describe()
count 73140.000000
mean 433.599918
std 289.671609
min 3.000000
25% 198.000000
50% 373.000000
75% 618.000000
max 1198.000000
Name: residueCount_x, dtype: float64

EDA


  
  import matplotlib.pyplot as plt
  from collections import Counter
  # Count instances per class
  cnt = Counter(df.classification)
  # Keep only the K most common classes (10 by default)
  top_classes = 10
  # Sort classes by frequency
  sorted_classes = cnt.most_common()[:top_classes]
  classes = [c[0] for c in sorted_classes]
  counts = [c[1] for c in sorted_classes]
  print("at least " + str(counts[-1]) + " instances per class")
  # Apply the filter to the dataframe
  print(str(df.shape[0]) + " instances before")
  df = df[[c in classes for c in df.classification]]
  print(str(df.shape[0]) + " instances after")
  seqs = df.sequence.values
  lengths = [len(s) for s in seqs]
  # Visualize class frequencies and the sequence-length distribution
  fig, axarr = plt.subplots(1, 2, figsize=(20, 5))
  axarr[0].bar(range(len(classes)), counts)
  plt.sca(axarr[0])
  plt.xticks(range(len(classes)), classes, rotation='vertical')
  axarr[0].set_ylabel('frequency')
  axarr[1].hist(lengths, bins=50, density=False)  # 'normed' was removed in matplotlib 3.x
  axarr[1].set_xlabel('sequence length')
  axarr[1].set_ylabel('# sequences')
  plt.show()

Transforming the labels

 

First, the dataset is reduced to samples that belong to one of the ten most common classes. Sequence lengths range from a handful of amino acids to several thousand.

Second, LabelBinarizer from sklearn.preprocessing is used to convert the string labels into a one-hot representation.


  
  from sklearn.preprocessing import LabelBinarizer
  # Transform labels to one-hot
  lb = LabelBinarizer()
  Y = lb.fit_transform(df.classification)
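Note that the fitted binarizer stores the class order in lb.classes_, which is reused later to label the confusion-matrix axes and the classification report.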

Further sequence preprocessing with keras

Text processing uses the keras library (the split itself comes from sklearn):

Tokenizer: converts each character of a sequence into a number.
pad_sequences: makes every sequence the same length (max_length).
train_test_split: splits the data into training and test samples.


  
  # Maximum length of a sequence; note that pad_sequences pads and
  # truncates at the *start* ('pre') by default, so it is the beginning
  # of longer sequences that is discarded. Default was 256.
  max_length = 300
  # Create and fit the tokenizer (one token per character)
  tokenizer = Tokenizer(char_level=True)
  tokenizer.fit_on_texts(seqs)
  # Represent input data as word-rank number sequences
  X = tokenizer.texts_to_sequences(seqs)
  X = sequence.pad_sequences(X, maxlen=max_length)

Building and fitting the keras model

The success of NLP suggests using word embeddings, which are already implemented as a keras layer (Embedding). In this case there are only 20 distinct "words" (one per amino acid). Instead of word embeddings, one could try 2D convolutions over a one-hot representation of the sequence, but the approach here focuses on applying NLP ideas to protein sequences.

To improve performance, a deeper architecture (stacked convolution and pooling layers) can be used. There are two convolutional layers here; in the original configuration the first had 64 filters with kernel size 6 and the second 32 filters with size 3, while the code below uses enlarged variants (128 filters of size 9, then 64 of size 4), with the originals kept in comments.
The activations are then flattened and passed through fully connected layers, the last of which is a softmax layer whose size matches the number of classes.

  # Hold out 20% of the data as a test set
  X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=.2)
 

  
  embedding_dim = 11  # originally 8
  # Create the model
  model = Sequential()
  model.add(Embedding(len(tokenizer.word_index) + 1, embedding_dim, input_length=max_length))
  # model.add(Conv1D(filters=64, kernel_size=6, padding='same', activation='relu'))  # orig
  model.add(Conv1D(filters=128, kernel_size=9, padding='valid', activation='relu', dilation_rate=1))
  # model.add(BatchNormalization())
  model.add(MaxPooling1D(pool_size=2))
  # model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))  # orig
  model.add(Conv1D(filters=64, kernel_size=4, padding='valid', activation='relu'))
  model.add(BatchNormalization())
  model.add(MaxPooling1D(pool_size=2))
  model.add(Flatten())
  model.add(Dense(200, activation='elu'))  # originally 128
  # model.add(BatchNormalization())
  # model.add(Dense(64, activation='elu'))
  model.add(BatchNormalization())
  model.add(Dense(top_classes, activation='softmax'))
  model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
  print(model.summary())


  
  # Stop training when validation accuracy stops improving
  es = EarlyStopping(monitor='val_acc', verbose=1, patience=4)
  model.fit(X_train, y_train, batch_size=128, verbose=1,
            validation_split=0.15, callbacks=[es], epochs=25)
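As a second variant, a multi-branch CNN in the style of common text-classification architectures is built with the functional API below: five parallel convolutions with kernel sizes 3, 5, 9, 15, and 21 read the same embedded sequence, and their pooled, flattened outputs are concatenated before the dense layers.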


  
  from keras.layers import Conv1D, MaxPooling1D, Concatenate, Input
  from keras.models import Sequential, Model
  units = 256
  num_filters = 32
  filter_sizes = (3, 5, 9, 15, 21)
  conv_blocks = []
  embedding_layer = Embedding(len(tokenizer.word_index) + 1, embedding_dim, input_length=max_length)
  es2 = EarlyStopping(monitor='val_acc', verbose=1, patience=4)
  sequence_input = Input(shape=(max_length,), dtype='int32')
  embedded_sequences = embedding_layer(sequence_input)
  z = Dropout(0.1)(embedded_sequences)
  # One convolutional branch per kernel size
  for sz in filter_sizes:
      conv = Conv1D(filters=num_filters,
                    kernel_size=sz,
                    padding="valid",
                    activation="relu",
                    strides=1)(z)
      conv = MaxPooling1D(pool_size=2)(conv)
      conv = Flatten()(conv)
      conv_blocks.append(conv)
  # Concatenate the branches
  z = Concatenate()(conv_blocks) if len(conv_blocks) > 1 else conv_blocks[0]
  z = Dropout(0.25)(z)
  z = BatchNormalization()(z)
  z = Dense(units, activation="relu")(z)
  z = BatchNormalization()(z)
  predictions = Dense(top_classes, activation="softmax")(z)
  model2 = Model(sequence_input, predictions)
  model2.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
  print(model2.summary())
  model2.fit(X_train, y_train, batch_size=64, verbose=1,
             validation_split=0.15, callbacks=[es2], epochs=30)
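Finally, both models are evaluated on the held-out test set: overall train/test accuracy, a row-normalized confusion matrix, and a per-class classification report (first for the stacked CNN, then for the multi-branch model).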


  
  from sklearn.metrics import confusion_matrix
  from sklearn.metrics import accuracy_score
  from sklearn.metrics import classification_report
  import itertools
  train_pred = model.predict(X_train)
  test_pred = model.predict(X_test)
  print("train-acc = " + str(accuracy_score(np.argmax(y_train, axis=1), np.argmax(train_pred, axis=1))))
  print("test-acc = " + str(accuracy_score(np.argmax(y_test, axis=1), np.argmax(test_pred, axis=1))))
  # Compute the confusion matrix
  cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(test_pred, axis=1))
  # Normalize each row so entries are per-class recall fractions
  cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
  np.set_printoptions(precision=2)
  plt.figure(figsize=(10, 10))
  plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
  plt.title('Confusion matrix')
  plt.colorbar()
  tick_marks = np.arange(len(lb.classes_))
  plt.xticks(tick_marks, lb.classes_, rotation=90)
  plt.yticks(tick_marks, lb.classes_)
  # for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
  #     plt.text(j, i, format(cm[i, j], '.2f'), horizontalalignment="center",
  #              color="white" if cm[i, j] > cm.max() / 2. else "black")
  plt.ylabel('True label')
  plt.xlabel('Predicted label')
  plt.show()
  print(classification_report(np.argmax(y_test, axis=1), np.argmax(test_pred, axis=1), target_names=lb.classes_))


  
  # Same evaluation for the multi-branch model
  train_pred = model2.predict(X_train)
  test_pred = model2.predict(X_test)
  print("train-acc = " + str(accuracy_score(np.argmax(y_train, axis=1), np.argmax(train_pred, axis=1))))
  print("test-acc = " + str(accuracy_score(np.argmax(y_test, axis=1), np.argmax(test_pred, axis=1))))
  # Compute the confusion matrix
  cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(test_pred, axis=1))
  # Normalize each row so entries are per-class recall fractions
  cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
  np.set_printoptions(precision=2)
  plt.figure(figsize=(10, 10))
  plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
  plt.title('Confusion matrix')
  plt.colorbar()
  tick_marks = np.arange(len(lb.classes_))
  plt.xticks(tick_marks, lb.classes_, rotation=90)
  plt.yticks(tick_marks, lb.classes_)
  # for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
  #     plt.text(j, i, format(cm[i, j], '.2f'), horizontalalignment="center",
  #              color="white" if cm[i, j] > cm.max() / 2. else "black")
  plt.ylabel('True label')
  plt.xlabel('Predicted label')
  plt.show()
  print(classification_report(np.argmax(y_test, axis=1), np.argmax(test_pred, axis=1), target_names=lb.classes_))


References:

1. https://www.rcsb.org

2. https://www.kaggle.com

 

Source: drugai.blog.csdn.net, author: DrugAI. Copyright belongs to the original author; please contact the author for reprint permission.

Original link: drugai.blog.csdn.net/article/details/100690926
