# awesome-metric-learning

😎 Awesome list about practical Metric Learning and its applications
## Motivation 🤓

At Qdrant, we have one goal: make metric learning more practical. This listing is in line with this purpose, and we aim to provide a concise yet useful list of awesomeness around metric learning. It is intended to be inspirational for productivity rather than serve as a full bibliography.

If you find it useful or like it in some other way, you may want to join our Discord server, where we are running a paper reading club on metric learning.
## Contributing 🤩

If you want to contribute to this project, but don't know how, you may want to check out the [contributing guide](/CONTRIBUTING.md). It's easy! 😌
## Surveys 📖
> It has guides for [supervised](http://contrib.scikit-learn.org/metric-learn/supervised.html), [weakly supervised](http://contrib.scikit-learn.org/metric-learn/weakly_supervised.html) and [unsupervised](http://contrib.scikit-learn.org/metric-learn/unsupervised.html) metric learning algorithms in the [metric_learn](http://contrib.scikit-learn.org/metric-learn/metric_learn.html) package.
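Most of the algorithms in `metric_learn` learn a Mahalanobis-type metric: in effect, a linear map `L` such that distances are Euclidean distances between the projected points `Lx`. A minimal pure-Python sketch of that idea follows; the matrix `L` here is hand-picked for illustration, not learned, and this is not the `metric_learn` API itself.

```python
import math

def mahalanobis_distance(x, y, L):
    """Distance under a linear map L: d(x, y) = ||L(x - y)||."""
    diff = [xi - yi for xi, yi in zip(x, y)]
    projected = [sum(row[j] * diff[j] for j in range(len(diff))) for row in L]
    return math.sqrt(sum(p * p for p in projected))

# A toy map that stretches the first feature and ignores the second,
# as a learned metric might if only the first feature is discriminative.
L = [[2.0, 0.0],
     [0.0, 0.0]]

d = mahalanobis_distance([1.0, 5.0], [0.0, -5.0], L)  # only the first axis counts
```

A learned metric of this form can then be plugged into any nearest-neighbour pipeline by transforming points with `L` once and using plain Euclidean distance afterwards.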
- A comprehensive study for newcomers.
  > Factors such as sampling strategies, distance metrics, and network structures are systematically analyzed by comparing the quantitative results of the methods.
[38;5;11m[1m▐[0m[38;5;12m [39m[38;5;12mIt discusses the need for metric learning, old and state-of-the-art approaches, and some real-world use cases.[39m
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
## Applications 🎮
> CLIP offers state-of-the-art zero-shot image classification and image retrieval with a natural language query. See [demo](https://colab.research.google.com/github/openai/clip/blob/master/notebooks/Interacting_with_CLIP.ipynb).
> This work achieves zero-shot classification and cross-modal audio retrieval from natural language queries.
> It is an open-class object detector that can detect any label encoded by CLIP without finetuning. See [demo](https://huggingface.co/spaces/akhaliq/Detic).
> TensorFlow Hub offers a collection of pretrained models from the paper [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899).
> GTR models are first initialized from a pre-trained T5 checkpoint. They are then further pre-trained with a set of community question-answer pairs. Finally, they are fine-tuned on the MS MARCO dataset.
> The two encoders are shared, so the GTR model functions as a single text encoder. The input is variable-length English text, and the output is a 768-dimensional vector.
> The method and pretrained models found in Flair go beyond zero-shot sequence classification and offer zero-shot span tagging abilities for tasks such as named entity recognition and part-of-speech tagging.
> It leverages HuggingFace Transformers and c-TF-IDF to create dense clusters, allowing for easily interpretable topics while keeping important words in the topic descriptions. It supports guided, (semi-)supervised, and dynamic topic modeling with beautiful visualizations.
> Identification of substances based on spectral analysis plays a vital role in forensic science. Similarly, the material identification process is of paramount importance for malfunction reasoning in manufacturing sectors and materials research.
> This model enables identifying materials with deep metric learning applied to X-Ray Diffraction (XRD) spectra. Read [this post](https://towardsdatascience.com/automatic-spectral-identification-using-deep-metric-learning-with-1d-regnet-and-adacos-8b7fb36f2d5f) for more background.
> Different from typical information retrieval tasks, code search requires bridging the semantic gap between the programming language and natural language, for better describing intrinsic concepts and semantics. The repository provides the pretrained models and source code for [Learning Deep Semantic Model for Code Search using CodeSearchNet Corpus](https://arxiv.org/abs/2201.11313), where they apply several tricks to achieve this.
> State-of-the-art methods are incapable of leveraging attributes from different types of items, and thus suffer from data sparsity problems, because it is quite challenging to represent items with different feature spaces jointly. To tackle this problem, they propose a kernel-based neural network, namely deep unified representation (DURation) for heterogeneous recommendation, to jointly model unified representations of heterogeneous items while preserving their original feature space topology structures. See [paper](https://arxiv.org/abs/2201.05861).
> It provides the implementation of [Item2Vec: Neural Item Embedding for Collaborative Filtering](https://arxiv.org/abs/1603.04259), wrapped as a `sklearn` estimator compatible with `GridSearchCV` and `BayesSearchCV` for hyperparameter tuning.
> You can search for the overall closest fit, or choose to focus on matching genre, mood, or instrumentation.
> It searches phrase-level answers to your questions in real time or retrieves passages for downstream tasks. Check out the [demo](http://densephrases.korea.ac.kr/), or see the [paper](https://arxiv.org/abs/2109.08133).
> Instead of leveraging NLI/XNLI, they make use of the text encoder of the CLIP model, concluding from casual experiments that it sometimes gives better accuracy than NLI-based models.
> Application of the SimCLR method to musical data, with out-of-domain generalization in million-scale music classification. See [demo](https://spijkervet.github.io/CLMR/examples/clmr-onnxruntime-web/) or [paper](https://arxiv.org/abs/2103.09410).
## Case Studies ✍️
## Libraries 🧰
> Quaterion is a framework for fine-tuning similarity learning models. The framework closes the "last mile" problem in training models for semantic search, recommendations, anomaly detection, extreme classification, matching engines, etc. It is designed to combine the performance of pre-trained models with specialization for the custom task while avoiding slow and costly training.
- A library for sentence-level embeddings.
  > Developed on top of the well-known [Transformers](https://github.com/huggingface/transformers) library, it provides an easy way to finetune Transformer-based models to obtain sequence-level embeddings.
> The goal of MatchZoo is to provide a high-quality codebase for deep text matching research, such as document retrieval, question answering, conversational response ranking, and paraphrase identification.
- A metric learning library in TensorFlow with a Keras-like API.
  > It provides support for self-supervised contrastive learning and state-of-the-art methods such as SimCLR, SimSiam, and Barlow Twins.
> A PyTorch library to train and run inference with contextually-keyed word vectors augmented with part-of-speech tags to achieve multi-word queries.
> A PyTorch library to efficiently train self-supervised computer vision models with state-of-the-art techniques such as SimCLR, SimSiam, Barlow Twins, and BYOL, among others.
> A library that helps you benchmark pretrained and custom embedding models on tens of datasets and tasks with ease.
- A Python implementation of a number of popular recommender algorithms.
  > It supports incorporating user and item features into the traditional matrix factorization. It represents users and items as a sum of the latent representations of their features, thus achieving a better generalization.
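The feature-sum idea described above can be sketched in a few lines: a user or item is the sum of its features' latent vectors, and a match score is the dot product of the two sums. The feature names and 2-d embeddings below are made-up toys for illustration, not the library's actual API.

```python
def represent(feature_ids, embeddings):
    """Represent a user or item as the sum of its features' latent vectors."""
    dim = len(next(iter(embeddings.values())))
    vec = [0.0] * dim
    for f in feature_ids:
        for i, v in enumerate(embeddings[f]):
            vec[i] += v
    return vec

def score(user_features, item_features, embeddings):
    """Match score: dot product of the summed user and item representations."""
    u = represent(user_features, embeddings)
    it = represent(item_features, embeddings)
    return sum(a * b for a, b in zip(u, it))

# Hypothetical latent vectors for user and item features.
emb = {
    "likes_rock":  [1.0, 0.0],
    "age_20s":     [0.0, 1.0],
    "genre_rock":  [1.0, 0.0],
    "new_release": [0.0, 0.5],
}
s = score(["likes_rock", "age_20s"], ["genre_rock", "new_release"], emb)
```

Because unseen users and items are still sums of known feature vectors, this formulation generalizes to cold-start cases that plain matrix factorization cannot handle.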
> It provides efficient multicore and memory-independent implementations of popular algorithms, such as online Latent Semantic Analysis (LSA/LSI/SVD), Latent Dirichlet Allocation (LDA), Random Projections (RP), Hierarchical Dirichlet Process (HDP) and word2vec.
> It provides implementations of algorithms such as KNN, LFM, SLIM, NeuMF, FM, DeepFM, VAE and so on, in order to ensure fair comparison of recommender system benchmarks.
## Tools ⚒️
> It supports UMAP, t-SNE, PCA, or custom techniques to analyze embeddings of encoders.
> It allows you to visualize the embedding space by explicitly selecting the axes through algebraic formulas on the embeddings (like king-man+woman) and to highlight specific items in the embedding space. It also supports implicit axes via PCA and t-SNE. See [paper](https://arxiv.org/abs/1905.12099).
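Axes like king-man+woman boil down to plain vector arithmetic plus cosine similarity. A tiny sketch with made-up 3-d embeddings (real word vectors have hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up toy embeddings for illustration only.
emb = {
    "king":  [0.9, 0.8, 0.1],
    "man":   [0.9, 0.1, 0.1],
    "woman": [0.1, 0.1, 0.9],
    "queen": [0.1, 0.8, 0.9],
    "apple": [0.5, 0.0, 0.2],
}

# The algebraic axis king - man + woman ...
axis = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]

# ... and its nearest neighbour among the remaining words.
best = max((w for w in emb if w not in ("king", "man", "woman")),
           key=lambda w: cosine(axis, emb[w]))
```

Tools like the one above simply let you place items along such an axis instead of an opaque PCA component.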
## Approximate Nearest Neighbors ⚡
> It provides benchmarking of 20+ ANN algorithms on nine standard datasets, with support for bringing your own dataset. ([Medium post](https://medium.com/towards-artificial-intelligence/how-to-choose-the-best-nearest-neighbors-algorithm-8d75d42b16ab?sk=889bc0006f5ff773e3a30fa283d91ee7))
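ANN benchmarks of this kind typically score an algorithm by recall@k: the fraction of the true k nearest neighbours (found by exact brute force) that the approximate index returned. A minimal sketch of that metric, with toy 2-d points:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def exact_knn(query, points, k):
    """Ground-truth k nearest neighbours by brute force."""
    return sorted(range(len(points)), key=lambda i: euclidean(query, points[i]))[:k]

def recall_at_k(approx_ids, exact_ids):
    """Fraction of the true k neighbours that the ANN index returned."""
    return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)

points = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]]
truth = exact_knn([0.1, 0.1], points, k=2)
recall = recall_at_k([0, 3], truth)  # a hypothetical ANN result that got 1 of 2 right
```

Benchmarks then plot recall against queries-per-second, since every ANN method trades one for the other.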
> It is not the fastest ANN algorithm, but it achieves memory efficiency thanks to various quantization and indexing methods such as IVF, PQ, and IVF-PQ. ([Tutorial](https://www.pinecone.io/learn/faiss-tutorial/))
> It is still one of the fastest ANN algorithms out there, though it requires relatively high memory usage. (Paper: [Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs](https://arxiv.org/abs/1603.09320))
> Paper: [Accelerating Large-Scale Inference with Anisotropic Vector Quantization](https://arxiv.org/abs/1908.10396)
## Papers 🔬
- Dimensionality Reduction by Learning an Invariant Mapping
  > Published by Yann LeCun et al. (2005), its main focus was on dimensionality reduction. However, the proposed method has excellent properties for metric learning, such as preserving neighbourhood relationships and generalization to unseen data, and it has had extensive applications and a great number of variations ever since. It is advised that you read [this great post](https://medium.com/@maksym.bekuzarov/losses-explained-contrastive-loss-f8f57fe32246) to better understand its importance for metric learning.
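The contrastive loss from this line of work can be sketched in a few lines: similar pairs are pulled together, dissimilar pairs are pushed apart up to a margin. Note that the 1/2 factor and the sign convention for the similar/dissimilar label vary across implementations; this is one common form.

```python
import math

def contrastive_loss(x1, x2, is_similar, margin=1.0):
    """Contrastive loss for a single pair of embeddings."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))
    if is_similar:
        return 0.5 * d ** 2                    # pull similar pairs together
    return 0.5 * max(0.0, margin - d) ** 2     # push dissimilar pairs apart, up to the margin

# A similar pair at distance 1 is penalized; a dissimilar pair already
# beyond the margin contributes zero loss.
sim_loss = contrastive_loss([0.0, 0.0], [1.0, 0.0], True)
dis_loss = contrastive_loss([0.0, 0.0], [2.0, 0.0], False)
```

The margin is what prevents the trivial solution of mapping everything to a single point: only dissimilar pairs closer than the margin generate a repulsive gradient.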
> The paper introduces Triplet Loss, which can be seen as the "ImageNet moment" for deep metric learning. It is still one of the state-of-the-art methods and has a great number of applications in almost any data modality.
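Triplet Loss itself is a one-liner: a hinge on the gap between the anchor-positive and anchor-negative distances. A minimal sketch (the margin value is illustrative):

```python
import math

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on the gap between anchor-positive and anchor-negative distances."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

# The loss vanishes once the negative is farther from the anchor
# than the positive by at least the margin.
loss = triplet_loss([0.0], [0.1], [1.0])
```

In practice, most of the engineering effort goes into mining informative (semi-hard) triplets, since random triplets quickly stop producing gradients.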
- A novel loss function with better properties.
  > It provides scale invariance, robustness against feature variance, and better convergence than Contrastive and Triplet Loss.
> Supervised metric learning without pairs or triplets.
> Although it was originally designed for the face recognition task, this loss function achieves state-of-the-art results in many other metric learning problems with simpler and faster data feeding. It is also robust against unclean and unbalanced data when modified with sub-centers and a dynamic margin.
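The core trick of this family of losses is to add an angular margin to the target-class logit before the softmax, so the target class must beat the others by a fixed angle. A sketch of the logit computation for one sample; the margin and scale values below are typical defaults, not prescriptive.

```python
import math

def margin_logit(cos_theta, is_target, margin=0.5, scale=64.0):
    """Add an angular margin m to the target class angle, then rescale."""
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))  # clamp for numeric safety
    if is_target:
        theta += margin            # penalize the target class angle
    return scale * math.cos(theta)

# For the same cosine similarity, the target class gets a strictly
# smaller logit, forcing the network to close the angular gap.
target_logit = margin_logit(0.8, True)
other_logit = margin_logit(0.8, False)
```

Because the margin lives in angle space, the resulting embeddings are directly comparable by cosine similarity at retrieval time, with no pair or triplet mining during training.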
[38;5;12mVICReg: Variance-Invariance-Covariance Regularization for [39m
|
||||
[38;5;12mSelf-Supervised Learning[39m
|
||||
|
||||
> The paper introduces a method that explicitly avoids the collapse problem in high dimensions with a simple regularization term on the variance of the embeddings along each dimension individually. This new term can be incorporated into other methods to stabilize training and improve performance.
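The variance term described above is just a hinge on the per-dimension standard deviation over a batch; a minimal NumPy sketch (function names and the `gamma` threshold are illustrative, following the idea rather than the paper's exact code), together with the companion covariance term:

```python
import numpy as np

def variance_loss(z, gamma=1.0, eps=1e-4):
    """VICReg variance term: hinge loss keeping the std of each embedding
    dimension above gamma, which prevents all batch samples from collapsing
    to the same vector."""
    std = np.sqrt(z.var(axis=0) + eps)           # std per dimension over batch
    return np.mean(np.maximum(0.0, gamma - std))

def covariance_loss(z):
    """VICReg covariance term: pushes off-diagonal covariance entries toward
    zero so dimensions carry decorrelated information."""
    n, d = z.shape
    zc = z - z.mean(axis=0)
    cov = (zc.T @ zc) / (n - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return (off_diag ** 2).sum() / d
```

A fully collapsed batch (every embedding identical) drives the variance term to roughly `gamma`, while a batch whose per-dimension spread already exceeds `gamma` contributes nothing, so the term only fires when collapse threatens.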
> The paper proposes using the mean centroid representation during training and retrieval for robustness against outliers and more stable features. It further reduces retrieval time and storage requirements, making it suitable for production deployments.
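The retrieval side of this idea is simple to sketch: aggregate instance embeddings into one mean vector per class and compare queries against those instead of every instance. A hedged NumPy illustration (helper names are hypothetical, cosine similarity assumed):

```python
import numpy as np

def class_centroids(embeddings, labels):
    """Aggregate instance embeddings (N, d) into one mean centroid per class.
    Comparing queries against centroids instead of every instance damps
    outliers and shrinks the index from N vectors to one per class."""
    classes = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def retrieve(query, classes, centroids):
    """Return the class whose centroid is nearest to the query (cosine)."""
    q = query / np.linalg.norm(query)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return classes[np.argmax(c @ q)]
```

Storage drops from one vector per gallery image to one per identity, which is where the production-deployment benefit comes from.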
> It demonstrates, among other things, that:
> - the composition of data augmentations plays a critical role, with random crop + random color distortion providing the best downstream classifier accuracy,
> - introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations,
> - contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
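The batch-size finding above follows from the contrastive objective itself: every other embedding in the batch serves as a negative. A minimal NumPy sketch of the NT-Xent loss SimCLR optimizes (assuming two batches of projected embeddings, one per augmented view, row i of each being views of the same image):

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss: for 2N embeddings from two augmented views, each
    embedding's positive is its counterpart view and the remaining 2N - 2
    batch embeddings act as negatives, so larger batches mean more negatives."""
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = len(z1)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)       # exclude self-similarity from denominator
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denominator = np.log(np.exp(sim).sum(axis=1))
    return np.mean(log_denominator - sim[np.arange(2 * n), positives])
```

When the two views of each image embed close together the loss is low; unrelated "views" yield a higher loss, which is the signal the projection head is trained against.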
> They also incorporate annotated pairs from natural language inference datasets into their contrastive learning framework in a supervised setting, showing that the contrastive learning objective regularizes the pre-trained embeddings' anisotropic space to be more uniform, and that it better aligns positive pairs when supervised signals are available.
> Mining informative negative instances is of central importance to deep metric learning (DML); however, this task is intrinsically limited by mini-batch training, where only a mini-batch of instances is accessible at each iteration. In this paper, we identify a "slow drift" phenomenon by observing that the embedding features drift exceptionally slowly even as the model parameters are updated throughout the training process. This suggests that the features of instances computed at preceding iterations can closely approximate their features extracted by the current model.
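The "slow drift" observation licenses a very small data structure: a FIFO memory of embeddings from past mini-batches that still approximately match the current model and can be mined as extra negatives. A hedged Python sketch (class and method names are illustrative, not the paper's code):

```python
import numpy as np
from collections import deque

class CrossBatchMemory:
    """Sketch of a cross-batch memory: store features and labels from recent
    mini-batches in a bounded FIFO queue; because embeddings drift slowly,
    these stale features remain usable as negatives for the current batch."""

    def __init__(self, size):
        self.feats = deque(maxlen=size)    # oldest entries evicted automatically
        self.labels = deque(maxlen=size)

    def enqueue(self, feats, labels):
        """Push one mini-batch of (feature, label) pairs into the memory."""
        for f, l in zip(feats, labels):
            self.feats.append(f)
            self.labels.append(l)

    def negatives(self, label):
        """All memorized features whose label differs from `label`."""
        return [f for f, l in zip(self.feats, self.labels) if l != label]
```

This turns the pool of candidate negatives from one mini-batch into thousands of recent embeddings at negligible extra compute, since no re-encoding of past instances is needed.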
## Datasets ℹ️
> Practitioners can use any labeled or unlabeled data for metric learning with an appropriately chosen method. However, some datasets are particularly important in the literature for benchmarking or other purposes, and we list them in this section.
- The Stanford Natural Language Inference Corpus, serving as a useful benchmark.
> The dataset contains pairs of sentences labeled as `contradiction`, `entailment`, or `neutral` regarding their semantic relationship, making it useful for training semantic search models in metric learning.
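A common way to exploit these labels for metric learning is to turn `entailment` rows into positive pairs and `contradiction` rows into hard negatives. A minimal sketch (the `(premise, hypothesis, label)` row format and function name are assumptions for illustration, not the corpus's native schema):

```python
def nli_to_pairs(examples):
    """Convert labeled NLI rows into metric-learning training pairs:
    `entailment` pairs become positives, `contradiction` pairs become hard
    negatives, and `neutral` rows are skipped."""
    positives, negatives = [], []
    for premise, hypothesis, label in examples:
        if label == "entailment":
            positives.append((premise, hypothesis))
        elif label == "contradiction":
            negatives.append((premise, hypothesis))
    return positives, negatives
```

The resulting pairs plug directly into contrastive or triplet objectives, which is how supervised sentence-embedding models typically consume NLI data.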
> Modeled on the SNLI corpus, the dataset contains sentence pairs from various genres of spoken and written text, and it also offers a distinctive cross-genre generalization evaluation.
> Shared as part of a Kaggle competition by Google, this dataset is more diverse and thus more interesting than the first version.
> The dataset consists of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes.
> The dataset is published along with the ["Deep Metric Learning via Lifted Structured Feature Embedding"](https://github.com/rksltnl/Deep-Metric-Learning-CVPR16) paper.
> The dataset is published along with ["The 2021 Image Similarity Dataset and Challenge"](http://arxiv.org/abs/2106.09672) paper.