M*t
2
【The following text is reposted from the EE board】
From: Mrabbit (Zoolander), Board: EE
Subject: Pattern recognition problem
Posted: BBS 未名空间站 (Wed Jul 30 15:44:16 2008), forwarded
I used Euclidean distance to measure the similarity between two feature
vectors. The feature vector includes continuous variables with values like
100 as well as discrete 1/0 variables. So I guess that when comparing the
similarity of two vectors, I need to scale the values in the feature
vector... Am I correct? If yes, how should I scale them?
y*o
3
I am currently using a LINKSYS WRT54G.
I want a router that can be configured to block every website except X, Y, and Z.
Most routers can only be set to:
1. Block URL 1, URL 2, URL 3, and let everything else through. --- Not what I want.
2. Block all websites. ---- Not what I want.
What I want is to allow access only to the sites on a whitelist I provide.
You want to visit games.com? Sorry, not on my list, access denied. YouTube? Not on the
list, no access. Wikipedia? On my list, let it through.
From my research it seems only the DLINK DIR 625 has this feature, but I have used
DLINK before and it was very unstable, which I really disliked.
I only want router-level control; editing the hosts file or using Microsoft's family safety is not an option.
Recommendations appreciated, thanks.
s*e
4
You can scale all features to [0,1]. E.g., for each feature,
divide by the maximum value of that feature.
Or you can standardize by subtracting the mean of each feature and
dividing by the standard deviation of that feature.
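Both scalings above can be sketched in a few lines of numpy. The toy matrix here is made up for illustration: one continuous column with values around 100 and one binary 0/1 column, matching the original poster's setup.

```python
import numpy as np

# Toy feature matrix (made-up data): column 0 is a continuous feature
# (values ~100), column 1 is a binary 0/1 feature.
X = np.array([[120.0, 1],
              [ 80.0, 0],
              [100.0, 1],
              [ 90.0, 0]])

# Min-max scaling: map each feature to [0, 1].
X_minmax = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Z-score standardization: subtract the mean of each feature and
# divide by its standard deviation.
X_zscore = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_minmax)
print(X_zscore)
```

After either transform, the continuous column no longer dominates the squared-difference terms in the Euclidean distance.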


【In reply to M*****t's post】

e*i
5
http://www.dd-wrt.com/wiki/index.php/Blocking_URLs/IPs

【In reply to y********o's post】

M*t
6
got it....thank you a bunch!!!

【In reply to s******e's post】

j*n
8
My take: there is no good fix. Normalizing like above effectively gives every
feature the same weight, but in general the features' weights are not equal.
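If per-feature weights were known, they could be folded directly into the distance instead of forcing all features to equal weight. A minimal sketch; the weight values here are made up purely for illustration:

```python
import math

def weighted_euclidean(u, v, w):
    """Euclidean distance with a per-feature weight vector w.
    A hypothetical helper, not any library's API."""
    return math.sqrt(sum(wi * (ui - vi) ** 2
                         for ui, vi, wi in zip(u, v, w)))

# Two already-normalized feature vectors and an illustrative weight vector.
u = [0.8, 1.0, 0.2]
v = [0.5, 0.0, 0.4]
w = [1.0, 2.0, 0.5]   # e.g. the binary feature counts double (assumption)

print(weighted_euclidean(u, v, w))
```

The hard part, as the post notes, is that the weights are usually unknown; they would have to come from domain knowledge or be learned (e.g. by cross-validation).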
l*s
9
I flashed my WRT_160N's firmware to DD-WRT. It was very easy, but it voids the warranty.

【In reply to y********o's post】
: Thank you. This looks a bit complicated; it seems to require updating the router's firmware, which I have never done.
v*e
10
re
The classification results you get this way can turn out very different.

【In reply to j****n's post】

M*t
11
ft...so, what is the best way to deal with this? The weight of each feature
is not known.....

【In reply to v********e's post】

d*e
12
I think if you are doing classification, CART may be better in such a case.
Euclidean distance is usually used when the samples are assumed to follow a
certain continuous distribution.


【In reply to M*****t's post】

M*t
13
Sorry, what is CART? Is it also a metric to measure the similarity between
two feature vectors? I do use it for classification purposes.


【In reply to d******e's post】

h*e
14
CART = Classification And Regression Tree


【In reply to M*****t's post】

p*r
15
I think what drbunie means is that CART makes its decision on each feature
independently. Because each feature is selected as a node to make "left" and
"right" decisions individually, we no longer need to worry about
scaling in Euclidean space.
By the same reasoning, Naive Bayes should also do the job. Any
comments?
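That scale-invariance is easy to check with a hand-rolled single-feature split search, a toy stand-in for the per-feature threshold search a tree does at each node. The data and the helper are made up for illustration:

```python
def best_stump(xs, ys):
    """Find the threshold on one feature that minimizes misclassification.
    Returns (threshold, error). A minimal sketch of the per-feature split
    search a decision tree performs; a hypothetical helper, not any
    library's API."""
    best = (None, len(ys) + 1)
    for t in sorted(set(xs)):
        preds = [1 if x >= t else 0 for x in xs]
        err = sum(p != y for p, y in zip(preds, ys))
        if err < best[1]:
            best = (t, err)
    return best

# Toy continuous feature (~100) with binary labels.
feature = [120.0, 80.0, 100.0, 90.0, 110.0, 85.0]
labels  = [1, 0, 1, 0, 1, 0]

t_raw, err_raw = best_stump(feature, labels)
t_scaled, err_scaled = best_stump([x / 100.0 for x in feature], labels)

# Rescaling the feature rescales the chosen threshold but produces the
# same partition of the samples and the same error, so no feature
# scaling is needed for threshold-based splits.
print(t_raw, err_raw)
print(t_scaled, err_scaled)
```

Any monotone rescaling of a single feature leaves the set of achievable splits unchanged, which is why tree-based methods (and Naive Bayes, which models each feature separately) sidestep the scaling issue.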


【In reply to M*****t's post】

H*S
16
In my opinion, DT and NBC are both weak learners. If a much stronger learner
is required for the job, an SVM or a boosting algorithm is suggested.
Further, based on your class distribution, some imbalanced-mining
tricks can be applied to improve the overall performance. That's my 2 cents.
M*t
17
For an imbalanced dataset (say, a 1%:99% distribution), which classification
algorithms can be used to achieve good accuracy?


【In reply to H****S's post】

s*e
18
1% to 99% is way too imbalanced...
Usually for an imbalanced data set you can do
sampling, e.g., over-sampling from the minority
class or under-sampling from the majority class.
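Both resampling strategies can be sketched with the standard library alone; the 99:1 toy data below is made up to mirror the 1%:99% split in the question.

```python
import random

random.seed(0)

# Made-up imbalanced dataset: 99 majority examples (label 0), 1 minority (label 1).
data = [(i, 0) for i in range(99)] + [(999, 1)]
minority = [d for d in data if d[1] == 1]
majority = [d for d in data if d[1] == 0]

# Over-sampling: duplicate minority examples (with replacement)
# until they match the majority count.
oversampled = majority + [random.choice(minority) for _ in range(len(majority))]

# Under-sampling: keep a random subset of the majority equal in size
# to the minority.
undersampled = minority + random.sample(majority, len(minority))

print(len(oversampled), len(undersampled))  # both are now class-balanced
```

At this extreme a ratio like 1:99, under-sampling alone throws away almost all the majority data, so in practice the two are often combined, and evaluation should use precision/recall rather than plain accuracy (a classifier that always predicts the majority class is already 99% "accurate").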


【In reply to M*****t's post】
