\[
p(x)=\prod_{j=1}^{n} p(x_j;\mu_j,\sigma_j^2).
\]
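This product of independent univariate Gaussians can be sketched numerically; a minimal NumPy sketch (the example values for `mu`, `sigma2`, and `x` are illustrative, not from the source):

```python
import numpy as np

def p(x, mu, sigma2):
    """Product of independent univariate Gaussian densities p(x_j; mu_j, sigma2_j)."""
    coef = 1.0 / np.sqrt(2.0 * np.pi * sigma2)
    return np.prod(coef * np.exp(-(x - mu) ** 2 / (2.0 * sigma2)))

mu = np.array([0.0, 1.0])      # per-feature means (illustrative)
sigma2 = np.array([1.0, 4.0])  # per-feature variances (illustrative)
x = np.array([0.5, 2.0])
density = p(x, mu, sigma2)
```

The density is maximized when every feature sits at its mean, which is a quick sanity check on the implementation.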
\[
\frac{\partial J}{\partial x_k^{(i)}}=\sum_{j:r(i,j)=1}\left((\theta^{(j)})^{T}x^{(i)}-y^{(i,j)}\right)\theta_k^{(j)}
\]
\[
\frac{\partial J}{\partial \theta_k^{(j)}}=\sum_{i:r(i,j)=1}\left((\theta^{(j)})^{T}x^{(i)}-y^{(i,j)}\right)x_k^{(i)}
\]
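Both gradients can be computed in one vectorized pass; a minimal NumPy sketch of the unregularized case (the shapes and variable names are assumptions, not from the source):

```python
import numpy as np

def cf_gradients(X, Theta, Y, R):
    """Unregularized collaborative-filtering gradients.
    X:     (num_items, n) item feature vectors x^(i)
    Theta: (num_users, n) user parameter vectors theta^(j)
    Y:     (num_items, num_users) ratings y^(i,j)
    R:     (num_items, num_users) 1 where a rating exists, else 0
    """
    E = (X @ Theta.T - Y) * R  # prediction errors, zeroed where r(i,j)=0
    X_grad = E @ Theta         # dJ/dx_k^(i):     sum over rated j of error * theta_k^(j)
    Theta_grad = E.T @ X       # dJ/dtheta_k^(j): sum over rated i of error * x_k^(i)
    return X_grad, Theta_grad

# Tiny illustrative example: one feature, two items, one user.
X = np.array([[1.0], [2.0]])
Theta = np.array([[0.5]])
Y = np.array([[1.0], [0.0]])
R = np.array([[1], [0]])  # only item 1 is rated
Xg, Tg = cf_gradients(X, Theta, Y, R)
```

Masking the error matrix with `R` before the matrix products is what restricts each sum to the $(i,j)$ pairs with $r(i,j)=1$.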
\begin{itemize}
\item WRONG You're an artist and hand-paint portraits for your clients. Each client gets a different portrait (of themselves) and gives you 1-5 star rating feedback, and each client purchases at most 1 portrait. You'd like to predict what rating your next customer will give you.
\item SELECTED You manage an online bookstore and you have the book ratings from many users. You want to learn to predict the expected sales volume (number of books sold) as a function of the average rating of a book.
\item SELECTED You own a clothing store that sells many styles and brands of jeans. You have collected reviews of the different styles and brands from frequent shoppers, and you want to use these reviews to offer those shoppers discounts on the jeans you think they are most likely to purchase.
\item WRONG You run an online bookstore and collect the ratings of many users. You want to use this to identify what books are "similar" to each other (i.e., if one user likes a certain book, what are other books that she might also like?)
\end{itemize}
\begin{itemize}
\item You can combine all three training sets into one without any modification and expect high performance from a recommendation system.
\item It is not possible to combine these websites' data. You must build three separate recommendation systems.
\item CORRECT You can merge the three datasets into one, but you should first normalize each dataset separately by subtracting the mean and then dividing by (max $-$ min), where (max $-$ min) is $(5-1)$, $(10-1)$, or $(100-1)$ for the three websites respectively.
\item You can combine all three training sets into one as long as you perform mean normalization and feature scaling after you merge the data.
\end{itemize}
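The per-site scaling described in the correct option can be sketched as follows; a minimal NumPy sketch (the rating vectors and the `scale` helper are hypothetical):

```python
import numpy as np

def scale(ratings, lo, hi):
    """Mean-normalize one site's ratings, then divide by that site's range (max - min)."""
    r = np.asarray(ratings, dtype=float)
    return (r - r.mean()) / (hi - lo)

site_a = scale([1, 3, 5], 1, 5)        # 1-5 stars,  range 4
site_b = scale([2, 6, 10], 1, 10)      # 1-10 scale, range 9
site_c = scale([1, 50, 100], 1, 100)   # 1-100 scale, range 99
merged = np.concatenate([site_a, site_b, site_c])  # now on a comparable scale
```

Because each site is centered and scaled before merging, a rating of "4 out of 5" and "80 out of 100" land in comparable positions rather than 4 vs 80.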
\begin{itemize}
\item WRONG Suppose you are writing a recommender system to predict a user's book preferences. In order to build such a system, you need that user to rate all the other books in your training set.
\item CORRECT For collaborative filtering, it is possible to use one of the advanced optimization algorithms (L-BFGS/conjugate gradient/etc.) to solve for both the $x^{(i)}$'s and $\theta^{(j)}$'s simultaneously.
\item CORRECT Even if each user has rated only a small fraction of all of your products (so r(i,j)=0 for the vast majority of (i,j) pairs), you can still build a recommender system by using collaborative filtering.
\item WRONG For collaborative filtering, the optimization algorithm you should use is gradient descent. In particular, you cannot use more advanced optimization algorithms (L-BFGS/conjugate gradient/etc.) for collaborative filtering, since you have to solve for both the $x^{(i)}$'s and $\theta^{(j)}$'s simultaneously.
\end{itemize}
\begin{itemize}
\item CORRECT total = sum(sum((A * B) .* R))
\item CORRECT C = (A * B) .* R; total = sum(C(:));
\item WRONG total = sum(sum((A * B) * R));
\item WRONG C = (A * B) * R; total = sum(C(:));
\end{itemize}
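The difference between the correct options (Octave's element-wise `.*`) and the wrong ones (matrix product `*`) can be mirrored in NumPy, where `*` is element-wise and `@` is matrix multiplication; the small matrices below are hypothetical:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])  # e.g. item features
B = np.array([[1.0, 0.0], [0.0, 1.0]])  # e.g. user parameters
R = np.array([[1.0, 0.0], [0.0, 1.0]])  # 1 where a rating exists

# Correct: mask predictions element-wise with R, then sum all entries.
total = np.sum((A @ B) * R)

# Wrong: matrix-multiplying by R mixes columns instead of masking entries.
wrong = np.sum((A @ B) @ R)
```

With these matrices, masking keeps only the two rated entries (1 and 4), while the matrix product sums over all of `A @ B`, so the two expressions disagree.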
Source: hiszm.blog.csdn.net, author: 孙中明. Copyright belongs to the original author; contact the author for reprint permission.
Original link: hiszm.blog.csdn.net/article/details/77914511