Do you think a horse is acerous or non-acerous?
A: non-acerous
supervised classification examples:
A1: from an album of tagged photos, recognize someone in a picture
+
A2: given someone's music choices and a bunch of features of that music (tempo, genre, etc.), recommend a new song
features visualization:
A: she likes those
classification by eye:
A: unclear
speed scatterplot: Grade and bumpiness:
A: smooth, flat
speed scatterplot 2:
A: medium, very steep
speed scatterplot 3:
A: bad, flat
from scatterplot to predictions:
A: more like this
from scatterplot to predictions 2:
A: unclear
from scatterplot to decision surfaces:
A: red cross
A good linear decision surface:
A: the line that is going between the blue and red
GaussianNB deployment on terrain data:
select the GaussianNB.py file (this one will be a bit tricky) and paste this under the defined function:
    clf = GaussianNB()
    clf.fit(features_train, labels_train)
    return clf
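Filled out, the function from that file might look like the sketch below. The toy terrain features and labels here are made up for illustration (in the course they come from the provided data-prep code), and the function name `classify` is an assumption:

```python
from sklearn.naive_bayes import GaussianNB

def classify(features_train, labels_train):
    # create and fit a Gaussian Naive Bayes classifier on the training set
    clf = GaussianNB()
    clf.fit(features_train, labels_train)
    return clf

# toy grade/bumpiness points, two per class (made up for illustration)
features_train = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
labels_train = [0, 0, 1, 1]

clf = classify(features_train, labels_train)
print(clf.predict([[0.15, 0.15]]))  # → [0]
```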
Calculating NB accuracy:
select studentCode.py and paste this under def submitAccuracy():
    accuracy = NBAccuracy(features_train, labels_train, features_test, labels_test)
    return accuracy
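The NBAccuracy helper called above is defined in the course files; a minimal sketch of what it does (an assumption, not the course's exact code) — fit on the training split, then score predictions on the held-out test split:

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

def NBAccuracy(features_train, labels_train, features_test, labels_test):
    # fit the classifier on the training data
    clf = GaussianNB()
    clf.fit(features_train, labels_train)
    # predict on the test data and compare against the true labels
    pred = clf.predict(features_test)
    return accuracy_score(labels_test, pred)

# tiny made-up train/test split for illustration
features_train = [[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 0.9]]
labels_train = [0, 0, 1, 1]
features_test = [[0.05, 0.05], [0.95, 0.95]]
labels_test = [0, 1]

print(NBAccuracy(features_train, labels_train, features_test, labels_test))  # → 1.0
```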
Cancer test:
A: 8%
prior and posterior:
A1: p(c|pos) ∝ p(c) · p(pos|c) = 0.009
A2: p(not(c)|pos) ∝ p(not(c)) · p(pos|not(c)) = 0.099
normalizing1:
normalize = 0.108
normalizing2:
p(c|pos) = 0.083333333
normalizing3:
p(not(c)|pos) = 0.916666667
total probability:
A: the posteriors sum to 1: p(c|pos) + p(not(c)|pos) = 1
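The cancer numbers above follow from a prior p(c) = 0.01 and a test with 90% sensitivity and 90% specificity; a few lines check the arithmetic:

```python
p_c = 0.01                # prior probability of cancer
p_pos_given_c = 0.9       # sensitivity: positive test given cancer
p_pos_given_not_c = 0.1   # false-positive rate (1 - specificity)

# unnormalized joint probabilities
joint_c = p_c * p_pos_given_c                  # 0.009
joint_not_c = (1 - p_c) * p_pos_given_not_c    # 0.099

normalizer = joint_c + joint_not_c             # 0.108
p_c_given_pos = joint_c / normalizer
p_not_c_given_pos = joint_not_c / normalizer

print(round(p_c_given_pos, 4))       # → 0.0833
print(round(p_not_c_given_pos, 4))   # → 0.9167
```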
bayes rule for classification:
A: SARA
chris or sara:
A: CHRIS
posterior probabilities:
A1: p(CHRIS|"Life Deal") = 0.571428571
A2: p(SARA|"Life Deal") = 0.428571429
bayesian probabilities on your own:
A1: p(CHRIS|"Love Deal") = 0.444444444
A2: p(SARA|"Love Deal") = 0.555555556
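These posteriors follow from the word frequencies given in the lesson (Chris: love 0.1, deal 0.8, life 0.1; Sara: love 0.5, deal 0.2, life 0.3; equal priors of 0.5). A short sketch of the calculation:

```python
# per-word frequencies from the lesson
chris = {"love": 0.1, "deal": 0.8, "life": 0.1}
sara = {"love": 0.5, "deal": 0.2, "life": 0.3}
prior = 0.5  # equal priors for both authors

def posterior_chris(words):
    # naive Bayes: multiply per-word probabilities (word order is ignored)
    p_chris, p_sara = prior, prior
    for w in words:
        p_chris *= chris[w]
        p_sara *= sara[w]
    # normalize so the two posteriors sum to 1
    return p_chris / (p_chris + p_sara)

print(round(posterior_chris(["life", "deal"]), 4))  # → 0.5714
print(round(posterior_chris(["love", "deal"]), 4))  # → 0.4444
```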
why is naive bayes naive?:
A: it ignores word order
what is the accuracy of your naive bayes author identifier?:
A: 0.9732650739476678
what is faster, training your data or making your predictions?
A: predicting