hexsha
stringlengths 40
40
| size
int64 5
1.04M
| ext
stringclasses 6
values | lang
stringclasses 1
value | max_stars_repo_path
stringlengths 3
344
| max_stars_repo_name
stringlengths 5
125
| max_stars_repo_head_hexsha
stringlengths 40
78
| max_stars_repo_licenses
sequencelengths 1
11
| max_stars_count
int64 1
368k
⌀ | max_stars_repo_stars_event_min_datetime
stringlengths 24
24
⌀ | max_stars_repo_stars_event_max_datetime
stringlengths 24
24
⌀ | max_issues_repo_path
stringlengths 3
344
| max_issues_repo_name
stringlengths 5
125
| max_issues_repo_head_hexsha
stringlengths 40
78
| max_issues_repo_licenses
sequencelengths 1
11
| max_issues_count
int64 1
116k
⌀ | max_issues_repo_issues_event_min_datetime
stringlengths 24
24
⌀ | max_issues_repo_issues_event_max_datetime
stringlengths 24
24
⌀ | max_forks_repo_path
stringlengths 3
344
| max_forks_repo_name
stringlengths 5
125
| max_forks_repo_head_hexsha
stringlengths 40
78
| max_forks_repo_licenses
sequencelengths 1
11
| max_forks_count
int64 1
105k
⌀ | max_forks_repo_forks_event_min_datetime
stringlengths 24
24
⌀ | max_forks_repo_forks_event_max_datetime
stringlengths 24
24
⌀ | content
stringlengths 5
1.04M
| avg_line_length
float64 1.14
851k
| max_line_length
int64 1
1.03M
| alphanum_fraction
float64 0
1
| lid
stringclasses 191
values | lid_prob
float64 0.01
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1de2d38890eeee07c6cdb12d41c98729b349f53f | 594 | md | Markdown | _project/10-easy-decorating-ideas-for-halloween-camping-and-rv-adventures.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | _project/10-easy-decorating-ideas-for-halloween-camping-and-rv-adventures.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | _project/10-easy-decorating-ideas-for-halloween-camping-and-rv-adventures.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | ---
layout: project_single
title: "10 Easy Decorating Ideas For Halloween Camping And RV Adventures!"
slug: "10-easy-decorating-ideas-for-halloween-camping-and-rv-adventures"
parent: "brilliant-tips-for-camping-with-kids"
---
10 Easy Decorating Ideas For Halloween Camping And RV Adventures! Halloween is all about costumes and decorations so when we decided to leave town for a camping weekend over the holiday, we wanted to bring some of the festivities along for the ride. A few quick tricks will turn your campsite, tent or RV into a spooky kid-friendly adventure the adults will love too! | 84.857143 | 367 | 0.79798 | eng_Latn | 0.992641 |
1de3402a9eec3a9aaa190fa254ee51dd2528068c | 7,045 | md | Markdown | README.md | angrybug/Akka.Persistence.MongoDB | 8f41d3cbb6263b2911a98d2529d4dd3dc14fac6d | [
"Apache-2.0"
] | 23 | 2015-06-24T01:31:23.000Z | 2022-01-25T20:15:20.000Z | README.md | angrybug/Akka.Persistence.MongoDB | 8f41d3cbb6263b2911a98d2529d4dd3dc14fac6d | [
"Apache-2.0"
] | 137 | 2015-06-25T09:24:38.000Z | 2022-03-18T22:10:19.000Z | README.md | ptjhuang/Akka.Persistence.MongoDB | 8f41d3cbb6263b2911a98d2529d4dd3dc14fac6d | [
"Apache-2.0"
] | 31 | 2015-06-24T07:06:29.000Z | 2022-01-19T17:49:37.000Z | # Akka.Persistence.MongoDB
Akka Persistence journal and snapshot store backed by MongoDB database.
### Setup
To activate the journal plugin, add the following lines to actor system configuration file:
```
akka.persistence.journal.plugin = "akka.persistence.journal.mongodb"
akka.persistence.journal.mongodb.connection-string = "<database connection string>"
akka.persistence.journal.mongodb.collection = "<journal collection>"
```
Similar configuration may be used to setup a MongoDB snapshot store:
```
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.mongodb"
akka.persistence.snapshot-store.mongodb.connection-string = "<database connection string>"
akka.persistence.snapshot-store.mongodb.collection = "<snapshot-store collection>"
```
Remember that connection string must be provided separately to Journal and Snapshot Store. To finish setup simply initialize plugin using: `MongoDbPersistence.Get(actorSystem);`
### Configuration
Both journal and snapshot store share the same configuration keys (however they resides in separate scopes, so they are definied distinctly for either journal or snapshot store):
```hocon
akka.persistence {
journal {
plugin = "akka.persistence.journal.mongodb"
mongodb {
# qualified type name of the MongoDb persistence journal actor
class = "Akka.Persistence.MongoDb.Journal.MongoDbJournal, Akka.Persistence.MongoDb"
# connection string used for database access
connection-string = ""
# should corresponding journal table's indexes be initialized automatically
auto-initialize = off
# dispatcher used to drive journal actor
plugin-dispatcher = "akka.actor.default-dispatcher"
# MongoDb collection corresponding with persistent journal
collection = "EventJournal"
# metadata collection
metadata-collection = "Metadata"
# For users with legacy data, who want to keep writing data to MongoDb using the original BSON format
# and not the standard binary format introduced in v1.4.0 (see https://github.com/akkadotnet/Akka.Persistence.MongoDB/issues/72)
# enable this setting via `legacy-serialization = on`.
#
# NOTE: this will likely break features such as Akka.Cluster.Sharding, IActorRef serialization, AtLeastOnceDelivery, and more.
legacy-serialization = off
}
}
snapshot-store {
plugin = "akka.persistence.snapshot-store.mongodb"
mongodb {
# qualified type name of the MongoDB persistence snapshot actor
class = "Akka.Persistence.MongoDb.Snapshot.MongoDbSnapshotStore, Akka.Persistence.MongoDb"
# connection string used for database access
connection-string = ""
# should corresponding snapshot's indexes be initialized automatically
auto-initialize = off
# dispatcher used to drive snapshot storage actor
plugin-dispatcher = "akka.actor.default-dispatcher"
# MongoDb collection corresponding with persistent snapshot store
collection = "SnapshotStore"
# For users with legacy data, who want to keep writing data to MongoDb using the original BSON format
# and not the standard binary format introduced in v1.4.0 (see https://github.com/akkadotnet/Akka.Persistence.MongoDB/issues/72)
# enable this setting via `legacy-serialization = on`.
#
# NOTE: this will likely break features such as Akka.Cluster.Sharding, IActorRef serialization, AtLeastOnceDelivery, and more.
legacy-serialization = off
}
}
}
```
### Programmatic configuration
You can programmatically overrides the connection string setting in the HOCON configuration by adding a `MongoDbPersistenceSetup` to the
`ActorSystemSetup` during `ActorSystem` creation. The `MongoDbPersistenceSetup` takes `MongoClientSettings` instances to be used to configure
MongoDB client connection to the server. The `connection-string` settings in the HOCON configuration will be ignored if any of these `MongoClientSettings`
exists inside the Setup object.
> [!NOTE]
> The HOCON configuration is still needed for this to work, only the `connection-string` setting in the configuration will be overriden.
Setting connection override for both snapshot store and journal:
```
// Set snapshotClientSettings or journalClientSettings to null if you do not use them.
var snapshotClientSettings = new MongoClientSettings();
var journalClientSettings = new MongoClientSettings();
// database names are not needed when its client setting is set to null
var snapshotDatabaseName = "theSnapshotDatabase"
var journalDatabaseName = "theJournalDatabase"
var setup = BootstrapSetup.Create()
.WithConfig(myHoconConfig)
.And(new MongoDbPersistenceSetup(snapshotDatabaseName, snapshotClientSettings, journalDatabaseName, journalClientSettings));
var actorSystem = ActorSystem.Create("actorSystem", setup);
```
Setting connection override only for snapshot store:
```
var snapshotClientSettings = new MongoClientSettings();
var snapshotDatabaseName = "theSnapshotDatabase"
var setup = BootstrapSetup.Create()
.WithConfig(myHoconConfig)
.And(new MongoDbPersistenceSetup(snapshotDatabaseName, snapshotClientSettings, null, null));
var actorSystem = ActorSystem.Create("actorSystem", setup);
```
Setting connection override only for journal:
```
var journalClientSettings = new MongoClientSettings();
var journalDatabaseName = "theJournalDatabase"
var setup = BootstrapSetup.Create()
.WithConfig(myHoconConfig)
.And(new MongoDbPersistenceSetup(null, null, journalDatabaseName, journalClientSettings));
var actorSystem = ActorSystem.Create("actorSystem", setup);
```
### Serialization
[Going from v1.4.0 onwards, all events and snapshots are saved as byte arrays using the standard Akka.Persistence format](https://github.com/akkadotnet/Akka.Persistence.MongoDB/issues/72).
However, in the event that you have one of the following use cases:
1. Legacy data all stored in the original BSON / "object" format;
2. A use case where BSON is preferable, i.e. so it can be queried directly via MongoDb queries rather than Akka.Persistence.Query; or
3. A requirement to keep all data in human-readable form.
Then you can disable binary serialization (enabled by default) via the following HOCON:
```
akka.persistence.mongodb{
journal{
legacy-serialization = off
}
snapshot-store{
legacy-serialization = off
}
}
```
Setting `legacy-serialization = on` will allow you to save objects in a BSON format.
**WARNING**: However, `legacy-serialization = on` will break Akka.NET serialization. `IActorRef`s, Akka.Cluster.Sharding, `AtLeastOnceDelivery` actors, and other built-in Akka.NET use cases can't be properly supported using this format. Use it at your own risk.
### Notice
- The MongoDB operator to limit the number of documents in a query only accepts an integer while akka provides a long as maximum for the loading of events during the replay. Internally the long value is cast to an integer and if the value is higher then Int32.MaxValue, Int32.MaxValue is used. So if you have stored more then 2,147,483,647 events for a single PersistenceId, you may have a problem :wink:
| 41.441176 | 404 | 0.781689 | eng_Latn | 0.915632 |
1de38491da6f341eb65511fac827a736c7ee96d5 | 230 | md | Markdown | _includes/sections/workshop/de/testimonials/testimonial3.md | matt-hires/nycmusikmarathon.github.io | d90741d31219cca0dbd1d5ae5ca64c6b980fb0a4 | [
"MIT"
] | 1 | 2021-08-24T18:27:23.000Z | 2021-08-24T18:27:23.000Z | _includes/sections/workshop/de/testimonials/testimonial3.md | schnaufman/nycmm | 71b6c34ed2043093339adc1ab823152ac57429b6 | [
"MIT"
] | 18 | 2020-04-04T15:54:43.000Z | 2021-07-30T07:24:29.000Z | _includes/sections/workshop/de/testimonials/testimonial3.md | schnaufman/nycmm | 71b6c34ed2043093339adc1ab823152ac57429b6 | [
"MIT"
] | 1 | 2021-08-08T19:07:50.000Z | 2021-08-08T19:07:50.000Z | ##### Chanda Rule hat es verstanden uns mit „Power and Love“ zu lenken und Emotionen und Gewicht in ihre Gospellieder zu legen. Ich habe durch sie viel Vertrauen in mein Singen bekommen.
<cite>Sieglinde M., Gospel Workshop</cite>
| 76.666667 | 186 | 0.773913 | deu_Latn | 0.996096 |
1de515b30b3166d0b19bed1ec4378bdf540ab6d7 | 1,132 | md | Markdown | README.md | royalamini8/prototype | d655319989d3dc32eb25a53529247bfcf3b0cc34 | [
"MIT"
] | 2 | 2020-10-29T07:26:35.000Z | 2021-05-10T04:31:35.000Z | README.md | royalamini8/prototype | d655319989d3dc32eb25a53529247bfcf3b0cc34 | [
"MIT"
] | 8 | 2020-11-04T08:48:25.000Z | 2021-01-31T15:36:22.000Z | README.md | royalamini8/prototype | d655319989d3dc32eb25a53529247bfcf3b0cc34 | [
"MIT"
] | null | null | null | # 📅 UTM Timetable Generator 📅
Manual Timetable Generator specifically for UTM students. However, students from other universities are able to use this timetable generator if applicable.
## Features:
* Manually add courses and sessions. Highly flexible
* Ability to save timetable and its customization settings. Auto load last save whenever you come back
* Clash checking. Notifies user when there is a clash in the timetable
* Stylized and customizable timetable. Set themes, font-sizes, grid sizes, number of columns and so on!
* Downloadable timetable in png format
<br>
---
<br>
# 🎓 Tutorial 🎓
![MainInterface](./resources/main.png)
If you have a single time slot that spans over 1 hour, simply stack them as follows:
![Stacked Time Slot 1](./resources/Consecutive1.png)
The timetable drawing program will be able to detect consecutive timeslots, and draw them merged
![Stacked Time Slot 2](./resources/Consecutive2.png)
<br>
<br>
---
Created using React, Redux, SCSS for styling, react-contenteditable (Library for ContentEditable elements in React),
and Konva (HTML5 Canvas Library) for drawing the timetable
| 29.025641 | 155 | 0.771201 | eng_Latn | 0.972393 |
1de62f1f366d07f55484340dcadadec46c12265f | 54,800 | md | Markdown | releases/release-1.14/release-notes-draft.md | daminisatya/sig-release | fc63ade08b6cbd42fa4d069bcea0ff97c54c7bd0 | [
"Apache-2.0"
] | 350 | 2017-07-24T17:03:06.000Z | 2022-03-27T21:02:42.000Z | releases/release-1.14/release-notes-draft.md | daminisatya/sig-release | fc63ade08b6cbd42fa4d069bcea0ff97c54c7bd0 | [
"Apache-2.0"
] | 1,834 | 2017-07-20T19:30:54.000Z | 2022-03-31T13:49:19.000Z | releases/release-1.14/release-notes-draft.md | daminisatya/sig-release | fc63ade08b6cbd42fa4d069bcea0ff97c54c7bd0 | [
"Apache-2.0"
] | 351 | 2017-07-21T22:42:56.000Z | 2022-03-27T21:02:41.000Z | # Kubernetes v1.14 Release Notes
## 1.14 What’s New
Support for Windows Nodes is Graduating to Stable ([#116](https://github.com/kubernetes/enhancements/issues/116) )
- Support for Windows Server 2019 for worker nodes and containers
- Support for out of tree networking with Azure-CNI, OVN-Kubernetes and Flannel
- Improved support for pods, service types, workload controllers and metrics/quotas to closely match the capabilities offered for Linux containers
kubernetes/enhancements: [#116](https://github.com/kubernetes/enhancements/issues/116) [[kep](https://github.com/kubernetes/enhancements/blob/master/keps/sig-windows/20190103-windows-node-support.md)]
Updated Plugin Mechanism for kubectl is Graduating to Stable ([#579](https://github.com/kubernetes/enhancements/issues/579))
- Extends functionality to kubectl to support extensions adding new commands as well as overriding specific subcommands (at any depth).
- Documentation fixes
kubernetes/enhancements: [#579](https://github.com/kubernetes/enhancements/issues/579) [[kep](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/0024-kubectl-plugins.md#summary)]
Durable Local Storage Management is Now GA ([#121](https://github.com/kubernetes/enhancements/issues/121#issuecomment-457396290))
- Makes locally attached (non-network attached) storage available as a persistent volume source.
- Allows users to take advantage of the typically cheaper and improved performance of persistent local storage
kubernetes/kubernetes: [#73525](https://github.com/kubernetes/kubernetes/pull/73525), [#74391](https://github.com/kubernetes/kubernetes/pull/74391), [#74769](http://github.com/kubernetes/kubernetes/pull/74769)
kubernetes/enhancements: [#121](https://github.com/kubernetes/enhancements/issues/121#issuecomment-457396290) [[kep](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/20190124-local-persistent-volumes.md)]
Pid Limiting is Graduating to Beta ([#757](https://github.com/kubernetes/enhancements/issues/757))
- Prevents a pod from starving pid resource
- Ability to isolate pid resources pod-to-pod and node-to-pod
kubernetes/kubernetes: [#73651](http://github.com/kubernetes/kubernetes/pull/73651)
kubernetes/enhancements: [#757](https://github.com/kubernetes/enhancements/issues/757) [[kep](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190129-pid-limiting.md)]
Pod Priority and Preemption in Kubernetes ([#564](https://github.com/kubernetes/enhancements/issues/564))
- Pod priority and preemption enables Kubernetes scheduler to schedule more important Pods first and when cluster is out of resources, it removes less important pods to create room for more important ones. The importance is specified by priority.
kubernetes/kubernetes: [#73498](https://github.com/kubernetes/kubernetes/pull/73498), [#73555](https://github.com/kubernetes/kubernetes/pull/73555), [#74465](https://github.com/kubernetes/kubernetes/pull/74465)
kubernetes/enhancements: [#564](https://github.com/kubernetes/enhancements/issues/564) [[kep](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190131-pod-priority-preemption.md)]
Pod Ready++ ([#580](https://github.com/kubernetes/enhancements/issues/580))
- Introduces extension point for external feedback on pod readiness.
kubernetes/kubernetes: [#74434](http://github.com/kubernetes/kubernetes/pull/74434),
kubernetes/enhancements: [#580](https://github.com/kubernetes/enhancements/issues/580) [[kep](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md)]
Kubeadm: Automate certificate copy between control planes in HA setups
- Joining control plane nodes to a HA cluster can now be simplified by enabling the optional automatic copy of certificates from an existing control plane node.
- You can now use `kubeadm init --experimental-upload-certs` and `kubeadm join --experimental-control-plane --certificate-key`.
kubernetes/kubeadm: [#1373](https://github.com/kubernetes/kubeadm/issues/1373)
kubernetes/enhancements: [#357](https://github.com/kubernetes/enhancements/issues/357) [[kep](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/20190122-Certificates-copy-for-kubeadm-join--control-plane.md)]
Kubeadm: Expose the `kubeadm join` workflow as phases
- The `kubeadm join` command can now be used in phases. Similar to the work that was done for `kubeadm init` in 1.13, in 1.14 the `join` phases can be now executed step-by-step/selectively using the `kubeadm join phase` sub-command. This makes it possible to further customize the workflow of joining nodes to the cluster.
kubernetes/kubeadm: [#1204](https://github.com/kubernetes/kubeadm/issues/1204)
kubernetes/enhancements: [kep](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/0029-20180918-kubeadm-phases-beta.md)
## Known Issues
- There is a known issue [coredns/coredns#2629](https://github.com/coredns/coredns/issues/2629) in CoreDNS 1.3.1, wherein if the Kubernetes API shuts down while CoreDNS is connected, CoreDNS will crash. The issue is fixed in CoreDNS 1.4.0 in [coredns/coredns#2529](https://github.com/coredns/coredns/pull/2529).
- Kubelet might fail to restart if an existing flexvolume mounted pvc contains a large number of directories, or is full. [#75019](https://github.com/kubernetes/kubernetes/pull/75019)
## Urgent Upgrade Notes
### (No, really, you MUST read this before you upgrade)
- kube-apiserver:
- Default RBAC policy no longer grants access to discovery and permission-checking APIs (used by `kubectl auth can-i`) to *unauthenticated* users. Upgraded clusters preserve prior behavior, but cluster administrators wishing to grant unauthenticated users access in new clusters will need to explicitly opt-in to expose the discovery and/or permission-checking APIs:
- `kubectl create clusterrolebinding anonymous-discovery --clusterrole=system:discovery --group=system:unauthenticated`
- `kubectl create clusterrolebinding anonymous-access-review --clusterrole=system:basic-user --group=system:unauthenticated`
- The deprecated --storage-versions flag has been removed. The storage versions will always be the default value built-in the kube-apiserver binary. ([#67678](https://github.com/kubernetes/kubernetes/pull/67678), [@caesarxuchao](https://github.com/caesarxuchao))
- The deprecated `--repair-malformed-updates` flag has been removed ([#73663](https://github.com/kubernetes/kubernetes/pull/73663), [@danielqsj](https://github.com/danielqsj))
- The `/swaggerapi/*` schema docs, deprecated since 1.7, have been removed in favor of the /openapi/v2 schema docs. ([#72924](https://github.com/kubernetes/kubernetes/pull/72924), [@liggitt](https://github.com/liggitt))
- The /swagger.json and /swagger-2.0.0.pb-v1 schema documents, deprecated since v1.10, have been removed in favor of `/openapi/v2` ([#73148](https://github.com/kubernetes/kubernetes/pull/73148), [@liggitt](https://github.com/liggitt))
- `kube-apiserver` now only aggregates openapi schemas from `/openapi/v2` endpoints of aggregated API servers. The fallback to aggregate from `/swagger.json` has been removed. Ensure aggregated API servers provide schema information via `/openapi/v2` (available since v1.10). ([#73441](https://github.com/kubernetes/kubernetes/pull/73441), [@roycaihw](https://github.com/roycaihw))
- The OpenAPI definitions with the prefix "io.k8s.kubernetes.pkg" (deprecated since 1.9) have been removed. ([#74596](https://github.com/kubernetes/kubernetes/pull/74596), [@sttts](https://github.com/sttts))
- The `ValidateProxyRedirects` feature was promoted to Beta and enabled by default. This feature restricts redirect-following from the apiserver to same-host redirects. If nodes are configured to respond to CRI streaming requests on a different host interface than what the apiserver makes requests on (only the case if not using the built-in dockershim & setting the kubelet flag `--redirect-container-streaming=true`), then these requests will be broken. In that case, the feature can be temporarily disabled until the node configuration is corrected. We suggest setting `--redirect-container-streaming=false` on the kubelet to avoid issues.([#72552](https://github.com/kubernetes/kubernetes/pull/72552), [@tallclair](https://github.com/tallclair))
- kubectl
- The deprecated `--show-all` flag to `kubectl get` has been removed ([#69255](https://github.com/kubernetes/kubernetes/pull/69255), [@Pingan2017](https://github.com/Pingan2017))
- kubelet
- The deprecated `--experimental-fail-swap-on` flag has been removed ([#69552](https://github.com/kubernetes/kubernetes/pull/69552), [@Pingan2017](https://github.com/Pingan2017))
- Health check (liveness & readiness) probes using an HTTPGetAction will no longer follow redirects to different hostnames from the original probe request. Instead, these non-local redirects will be treated as a Success (the documented behavior). In this case an event with reason "ProbeWarning" will be generated, indicating that the redirect was ignored. If you were previously relying on the redirect to run health checks against different endpoints, you will need to perform the healthcheck logic outside the Kubelet, for instance by proxying the external endpoint rather than redirecting to it. ([#75416](https://github.com/kubernetes/kubernetes/pull/75416), [@tallclair](https://github.com/tallclair))
- client-go
- The deprecated versionless API group accessors (like `clientset.Apps()`) have been removed. Use an explicit version instead (like `clientset.AppsV1()`) ([#74422](https://github.com/kubernetes/kubernetes/pull/74422), [@liggitt](https://github.com/liggitt))
- The disk-cached discovery client is moved from k8s.io/client-go/discovery to k8s.io/client-go/discovery/cached/disk.
The memory-cached discovery client is moved from k8s.io/client-go/discovery/cached to k8s.io/client-go/discovery/cached/memory.
([#72214](https://github.com/kubernetes/kubernetes/pull/72214), [@caesarxuchao](https://github.com/caesarxuchao))
- kubeadm
- `kubeadm alpha preflight` and `kubeadm alpha preflight node` are removed; you can now use `kubeadm join phase preflight` ([#73718](https://github.com/kubernetes/kubernetes/pull/73718), [@fabriziopandini](https://github.com/fabriziopandini))
- The deprecated taints `node.alpha.kubernetes.io/notReady` and `node.alpha.kubernetes.io/unreachable` are no longer supported or adjusted. These uses should be replaced with `node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable`
([#73001](https://github.com/kubernetes/kubernetes/pull/73001), [@shivnagarajan](https://github.com/shivnagarajan))
- Any Prometheus queries that match `pod_name` and `container_name` labels (e.g. cadvisor or kubelet probe metrics) should be updated to use `pod` and `container` instead. `pod_name` and `container_name` labels will be present alongside `pod` and `container` labels for one transitional release and removed in the future.
([#69099](https://github.com/kubernetes/kubernetes/pull/69099), [@ehashman](https://github.com/ehashman))
## Deprecations
- kubectl
- `kubectl convert` is deprecated and will be removed in v1.17.
- The `--export` flag for the `kubectl get` command is deprecated and will be removed in v1.18. ([#73787](https://github.com/kubernetes/kubernetes/pull/73787), [@soltysh](https://github.com/soltysh))
- kubelet
- OS and Arch information is now recorded in `kubernetes.io/os` and `kubernetes.io/arch` labels on Node objects. The previous labels (`beta.kubernetes.io/os` and `beta.kubernetes.io/arch`) are still recorded, but are deprecated and targeted for removal in v1.18. ([#73333](https://github.com/kubernetes/kubernetes/pull/73333), [@yujuhong](https://github.com/yujuhong))
- The `--containerized` flag is deprecated and will be removed in a future release ([#74267](https://github.com/kubernetes/kubernetes/pull/74267), [@dims](https://github.com/dims))
- hyperkube
- The `--make-symlinks` flag is deprecated and will be removed in a future release. ([#74975](https://github.com/kubernetes/kubernetes/pull/74975), [@dims](https://github.com/dims))
- API
- Ingress resources are now available via `networking.k8s.io/v1beta1`. Ingress resources in `extensions/v1beta1` are deprecated and will no longer be served in v1.18. Existing persisted data is available via the new API group/version ([#74057](https://github.com/kubernetes/kubernetes/pull/74057), [@liggitt](https://github.com/liggitt))
- NetworkPolicy resources will no longer be served from `extensions/v1beta1` in v1.16. Migrate use to the `networking.k8s.io/v1` API, available since v1.8. Existing persisted data can be retrieved via the `networking.k8s.io/v1` API.
- PodSecurityPolicy resources will no longer be served from `extensions/v1beta1` in v1.16. Migrate to the `policy/v1beta1` API, available since v1.10. Existing persisted data can be retrieved via the `policy/v1beta1` API.
- DaemonSet, Deployment, and ReplicaSet resources will no longer be served from `extensions/v1beta1`, `apps/v1beta1`, or `apps/v1beta2` in v1.16. Migrate to the `apps/v1` API, available since v1.9. Existing persisted data can be retrieved via the `apps/v1` API.
- PriorityClass resources have been promoted to `scheduling.k8s.io/v1` with no changes. The `scheduling.k8s.io/v1beta1` and `scheduling.k8s.io/v1alpha1` versions are now deprecated and will stop being served by default in v1.17. ([#73555](https://github.com/kubernetes/kubernetes/pull/73555), [#74465](https://github.com/kubernetes/kubernetes/pull/74465), [@bsalamat](https://github.com/bsalamat))
- The `export` query parameter for list API calls is deprecated and will be removed in v1.18 ([#73783](https://github.com/kubernetes/kubernetes/pull/73783), [@deads2k](https://github.com/deads2k))
- The following features are now GA, and the associated feature gates are deprecated and will be removed in v1.15:
- `CustomPodDNS`
- `HugePages`
- `MountPropagation`
- `PersistentLocalVolumes`
- CoreDNS: The following directives or keywords are deprecated and will be removed in v1.15:
- `upstream` option of `kubernetes` plugin, becoming default behavior in v1.15.
- `proxy` plugin replaced by `forward` plugin
## Removed and deprecated metrics
### Removed metrics
- `reflector_items_per_list`
- `reflector_items_per_watch`
- `reflector_last_resource_version`
- `reflector_list_duration_seconds`
- `reflector_lists_total`
- `reflector_short_watches_total`
- `reflector_watch_duration_seconds`
- `reflector_watches_total`
### Deprecated metrics
- `rest_client_request_latency_seconds` -> `rest_client_request_duration_seconds`
- `apiserver_proxy_tunnel_sync_latency_secs` -> `apiserver_proxy_tunnel_sync_duration_seconds`
- `scheduler_scheduling_latency_seconds` -> `scheduler_scheduling_duration_seconds`
- `kubelet_pod_worker_latency_microseconds` -> `kubelet_pod_worker_duration_seconds`
- `kubelet_pod_start_latency_microseconds` -> `kubelet_pod_start_duration_seconds`
- `kubelet_cgroup_manager_latency_microseconds` -> `kubelet_cgroup_manager_duration_seconds`
- `kubelet_pod_worker_start_latency_microseconds` -> `kubelet_pod_worker_start_duration_seconds`
- `kubelet_pleg_relist_latency_microseconds` -> `kubelet_pleg_relist_duration_seconds`
- `kubelet_pleg_relist_interval_microseconds` -> `kubelet_pleg_relist_interval_seconds`
- `kubelet_eviction_stats_age_microseconds` -> `kubelet_eviction_stats_age_seconds`
- `kubelet_runtime_operations` -> `kubelet_runtime_operations_total`
- `kubelet_runtime_operations_latency_microseconds` -> `kubelet_runtime_operations_duration_seconds`
- `kubelet_runtime_operations_errors` -> `kubelet_runtime_operations_errors_total`
- `kubelet_device_plugin_registration_count` -> `kubelet_device_plugin_registration_total`
- `kubelet_device_plugin_alloc_latency_microseconds` -> `kubelet_device_plugin_alloc_duration_seconds`
- `docker_operations` -> `docker_operations_total`
- `docker_operations_latency_microseconds` -> `docker_operations_latency_seconds`
- `docker_operations_errors` -> `docker_operations_errors_total`
- `docker_operations_timeout` -> `docker_operations_timeout_total`
- `network_plugin_operations_latency_microseconds` -> `network_plugin_operations_latency_seconds`
- `sync_proxy_rules_latency_microseconds` -> `sync_proxy_rules_latency_seconds`
- `apiserver_request_count` -> `apiserver_request_total`
- `apiserver_request_latencies` -> `apiserver_request_latency_seconds`
- `apiserver_request_latencies_summary` -> `apiserver_request_latency_seconds`
- `apiserver_dropped_requests` -> `apiserver_dropped_requests_total`
- `etcd_helper_cache_hit_count` -> `etcd_helper_cache_hit_total`
- `etcd_helper_cache_miss_count` -> `etcd_helper_cache_miss_total`
- `etcd_helper_cache_entry_count` -> `etcd_helper_cache_entry_total`
- `etcd_request_cache_get_latencies_summary` -> `etcd_request_cache_get_latency_seconds`
- `etcd_request_cache_add_latencies_summary` -> `etcd_request_cache_add_latency_seconds`
- `etcd_request_latencies_summary` -> `etcd_request_latency_seconds`
- `transformation_latencies_microseconds` -> `transformation_latencies_seconds`
- `data_key_generation_latencies_microseconds` -> `data_key_generation_latencies_seconds`
## Notable Features
- Increased the histogram resolution of the API server client certificate to accommodate short-lived (< 6h) client certificates. ([#74806](https://github.com/kubernetes/kubernetes/pull/74806), [@mxinden](https://github.com/mxinden))
- Updated to use golang 1.12 ([#74632](https://github.com/kubernetes/kubernetes/pull/74632), [@cblecker](https://github.com/cblecker))
- The `RunAsGroup` feature has been promoted to beta and enabled by default. `PodSpec` and `PodSecurityPolicy` objects can be used to control the primary GID of containers on supported container runtimes. ([#73007](https://github.com/kubernetes/kubernetes/pull/73007), [@krmayankk](https://github.com/krmayankk))
- Added the same information to an init container as a standard container in a pod when using `PodPresets`. ([#71479](https://github.com/kubernetes/kubernetes/pull/71479), [@soggiest](https://github.com/soggiest))
- kube-conformance image will now run ginkgo with the `--dryRun` flag if the container is run with the environment variable E2E_DRYRUN set. ([#74731](https://github.com/kubernetes/kubernetes/pull/74731), [@johnSchnake](https://github.com/johnSchnake))
- Introduced dynamic volume provisioning shim for CSI migration ([#73653](https://github.com/kubernetes/kubernetes/pull/73653), [@ddebroy](https://github.com/ddebroy))
- Applied resources from a directory containing kustomization.yaml ([#74140](https://github.com/kubernetes/kubernetes/pull/74140), [@Liujingfang1](https://github.com/Liujingfang1))
- kubeadm: Allowed to download certificate secrets uploaded by `init` or `upload-certs` phase, allowing to transfer certificate secrets (certificates and keys) from the cluster to other master machines when creating HA deployments. ([#74168](https://github.com/kubernetes/kubernetes/pull/74168), [@ereslibre](https://github.com/ereslibre))
- The `--quiet` option to `kubectl run` now suppresses resource deletion messages emitted when the `--rm` option is specified. ([#73266](https://github.com/kubernetes/kubernetes/pull/73266), [@awh](https://github.com/awh))
- Added Custom Resource support to `kubectl autoscale` ([#72678](https://github.com/kubernetes/kubernetes/pull/72678), [@rmohr](https://github.com/rmohr))
- Cinder volume limit can now be configured from node too ([#74542](https://github.com/kubernetes/kubernetes/pull/74542), [@gnufied](https://github.com/gnufied))
- It is now possible to combine the `-f` and `-l` flags in `kubectl logs` ([#67573](https://github.com/kubernetes/kubernetes/pull/67573), [@m1kola](https://github.com/m1kola))
- New conformance tests added for API Aggregation. ([#63947](https://github.com/kubernetes/kubernetes/pull/63947), [@jennybuckley](https://github.com/jennybuckley))
- Moved fluentd-elasticsearch addon images to community controlled location ([#73819](https://github.com/kubernetes/kubernetes/pull/73819), [@coffeepac](https://github.com/coffeepac))
- Removed local etcd members from the etcd cluster when `kubeadm reset` ([#74112](https://github.com/kubernetes/kubernetes/pull/74112), [@pytimer](https://github.com/pytimer))
- kubeadm will now not fail preflight checks when running on >= 5.0 Linux kernel ([#74355](https://github.com/kubernetes/kubernetes/pull/74355), [@brb](https://github.com/brb))
- Scheduler cache snapshot algorithm has been optimized to improve scheduling throughput. ([#74041](https://github.com/kubernetes/kubernetes/pull/74041), [@bsalamat](https://github.com/bsalamat))
- It is now possible to upload certificates required to join a new control-plane to kubeadm-certs secret using the flag `--experimental-upload-certs` on `init` or upload-certs phase. ([#73907](https://github.com/kubernetes/kubernetes/pull/73907), [@yagonobre](https://github.com/yagonobre))
[@RobertKrawitz](https://github.com/RobertKrawitz))
- `kubectl auth reconcile` now outputs details about what changes are being made ([#71564](https://github.com/kubernetes/kubernetes/pull/71564), [@liggitt](https://github.com/liggitt))
- Added Kustomize as a subcommand in kubectl ([#73033](https://github.com/kubernetes/kubernetes/pull/73033), [@Liujingfang1](https://github.com/Liujingfang1))
- Added `kubelet_node_name` metrics. ([#72910](https://github.com/kubernetes/kubernetes/pull/72910), [@danielqsj](https://github.com/danielqsj))
- Updated AWS SDK to v1.16.26 for ECR PrivateLink support ([#73435](https://github.com/kubernetes/kubernetes/pull/73435), [@micahhausler](https://github.com/micahhausler))
- Expanded `kubectl wait` to work with more types of selectors. ([#71746](https://github.com/kubernetes/kubernetes/pull/71746), [@rctl](https://github.com/rctl))
([#72832](https://github.com/kubernetes/kubernetes/pull/72832), [@MrHohn](https://github.com/MrHohn))
- Added configuration for AWS endpoint fine control: ([#72245](https://github.com/kubernetes/kubernetes/pull/72245), [@ampsingram](https://github.com/ampsingram))
- The CoreDNS configuration now has the forward plugin for proxy in the default configuration instead of the proxy plugin. ([#73267](https://github.com/kubernetes/kubernetes/pull/73267), [@rajansandeep](https://github.com/rajansandeep))
- Added alpha field storageVersionHash to the discovery document for each resource. Its value must be treated as opaque by clients. Only equality comparison on the value is valid. ([#73191](https://github.com/kubernetes/kubernetes/pull/73191), [@caesarxuchao](https://github.com/caesarxuchao))
- If you are running the cloud-controller-manager and you have the `pvlabel.kubernetes.io` alpha Initializer enabled, you must now enable PersistentVolume labeling using the `PersistentVolumeLabel` admission controller instead. You can do this by adding `PersistentVolumeLabel` in the `--enable-admission-plugins` kube-apiserver flag. ([#73102](https://github.com/kubernetes/kubernetes/pull/73102), [@andrewsykim](https://github.com/andrewsykim))
- kubectl supports copying files with wild card ([#72641](https://github.com/kubernetes/kubernetes/pull/72641), [@dixudx](https://github.com/dixudx))
- kubeadm now attempts to detect an installed CRI by its usual domain socket, so that `--cri-socket` can be omitted from the command line if Docker is not used and there is a single CRI installed. ([#69366](https://github.com/kubernetes/kubernetes/pull/69366), [@rosti](https://github.com/rosti))
- `CSINodeInfo` and `CSIDriver` CRDs have been installed in the local cluster. ([#72584](https://github.com/kubernetes/kubernetes/pull/72584), [@xing-yang](https://github.com/xing-yang))
- Node OS/arch labels have been promoted to GA ([#73048](https://github.com/kubernetes/kubernetes/pull/73048), [@yujuhong](https://github.com/yujuhong))
- Added support for max attach limit for Cinder ([#72980](https://github.com/kubernetes/kubernetes/pull/72980), [@gnufied](https://github.com/gnufied))
- Enabled mTLS encryption between etcd and kube-apiserver in GCE ([#70144](https://github.com/kubernetes/kubernetes/pull/70144), [@wenjiaswe](https://github.com/wenjiaswe))
- Added `ResourceVersion` as a precondition for delete in order to ensure a delete fails if an unobserved change happens to an object. ([#74040](https://github.com/kubernetes/kubernetes/pull/74040), [@ajatprabha](https://github.com/ajatprabha))
- There is now support for collecting pod logs under `/var/log/pods/NAMESPACE_NAME_UID` to stackdriver with `k8s_pod` resource type. ([#74502](https://github.com/kubernetes/kubernetes/pull/74502), [@Random-Liu](https://github.com/Random-Liu))
- Changed CRI pod log directory from `/var/log/pods/UID` to `/var/log/pods/NAMESPACE_NAME_UID`. ([#74441](https://github.com/kubernetes/kubernetes/pull/74441), [@Random-Liu](https://github.com/Random-Liu))
- `RuntimeClass` has been promoted to beta, and is enabled by default. ([#75003](https://github.com/kubernetes/kubernetes/pull/75003), [@tallclair](https://github.com/tallclair))
- New "dry_run" metric label (indicating the value of the dryRun query parameter) has been added into the metrics:
* apiserver_request_total
* apiserver_request_duration_seconds
New "APPLY" value for the "verb" metric label which indicates a PATCH with "Content-Type: apply-patch+yaml". This value is experimental and will only be present if the ServerSideApply alpha feature is enabled. ([#74997](https://github.com/kubernetes/kubernetes/pull/74997), [@jennybuckley](https://github.com/jennybuckley))
- GCE: bumped COS image version to `cos-beta-73-11647-64-0` ([#75149](https://github.com/kubernetes/kubernetes/pull/75149), [@yguo0905](https://github.com/yguo0905))
- Added alpha support for ephemeral CSI inline volumes that are embedded in pod specs. ([#74086](https://github.com/kubernetes/kubernetes/pull/74086), [@vladimirvivien](https://github.com/vladimirvivien))
## API Changes
- [CRI] Added a new field called `runtime_handler` into `PodSandbox` and `PodSandboxStatus` to track the `RuntimeClass` information of a pod. ([#73833](https://github.com/kubernetes/kubernetes/pull/73833), [@haiyanmeng](https://github.com/haiyanmeng))
## Detailed Bug Fixes And Changes
### API Machinery
- client-go: `PortForwarder.GetPorts()` now contain correct local port if no local port was initially specified when setting up the port forwarder ([#73676](https://github.com/kubernetes/kubernetes/pull/73676), [@martin-helmich](https://github.com/martin-helmich))
- Fixed an issue with missing `apiVersion/kind` in object data sent to admission webhooks ([#74448](https://github.com/kubernetes/kubernetes/pull/74448), [@liggitt](https://github.com/liggitt))
- Prometheus metrics for `crd_autoregister`, `crd_finalizer` and `crd_naming_condition_controller` are exported. ([#71767](https://github.com/kubernetes/kubernetes/pull/71767), [@roycaihw](https://github.com/roycaihw))
- Fixed admission metrics in seconds. ([#72343](https://github.com/kubernetes/kubernetes/pull/72343), [@danielqsj](https://github.com/danielqsj))
- When a watch is closed by an HTTP2 load balancer and we are told to go away, skip printing the message to stderr by default.
- Spedup kubectl by >10 when calling out to kube-apiserver for discovery information. ([#73345](https://github.com/kubernetes/kubernetes/pull/73345), [@sttts](https://github.com/sttts))
- Fixed watch to not send the same set of events multiple times causing watcher to go back in time ([#73845](https://github.com/kubernetes/kubernetes/pull/73845), [@wojtek-t](https://github.com/wojtek-t))
([#73277](https://github.com/kubernetes/kubernetes/pull/73277), [@smarterclayton](https://github.com/smarterclayton))
- Fix kube-apiserver not to create default/kubernetes service endpoints before it reports readiness via the /healthz and therefore is ready to serve requests. Also early during startup old endpoints are remove which might be left over from a previously crashed kube-apiserver. ([#74668](https://github.com/kubernetes/kubernetes/pull/74668), [@sttts](https://github.com/sttts))
- Add a configuration field to shorten the timeout of validating/mutating admission webhook call. The timeout value must be between 1 and 30 seconds. Default to 30 seconds when unspecified. ([#74562](https://github.com/kubernetes/kubernetes/pull/74562), [@roycaihw](https://github.com/roycaihw))
- The apiserver, including both the kube-apiserver and apiservers built with the generic apiserver library, will now return 413 RequestEntityTooLarge error if a json patch contains more than 10,000 operations. ([#74000](https://github.com/kubernetes/kubernetes/pull/74000), [@caesarxuchao](https://github.com/caesarxuchao))
- Fixed an error processing watch events when running skewed apiservers ([#73482](https://github.com/kubernetes/kubernetes/pull/73482), [@liggitt](https://github.com/liggitt))
- jsonpath expressions containing `[start:end:step]` slice are now evaluated correctly ([#73149](https://github.com/kubernetes/kubernetes/pull/73149), [@liggitt](https://github.com/liggitt))
- `metadata.deletionTimestamp` is no longer moved into the future when issuing repeated DELETE requests against a resource containing a finalizer. ([#73138](https://github.com/kubernetes/kubernetes/pull/73138), [@liggitt](https://github.com/liggitt))
- Fixed kube-apiserver not to create default/kubernetes service endpoints before it reports readiness via the /healthz and therefore is ready to serve requests. Also early during startup old endpoints are remove which might be left over from a previously crashed kube-apiserver. ([#74668](https://github.com/kubernetes/kubernetes/pull/74668), [@sttts](https://github.com/sttts))
- `watch.Until` now works for long durations. ([#67350](https://github.com/kubernetes/kubernetes/pull/67350), [@tnozicka](https://github.com/tnozicka))
- Added duration metric for CRD webhook converters. ([#74376](https://github.com/kubernetes/kubernetes/pull/74376), [@mbohlool](https://github.com/mbohlool))
- Fixed keymutex issues which may crash in some platforms. ([#74348](https://github.com/kubernetes/kubernetes/pull/74348), [@danielqsj](https://github.com/danielqsj))
- Considerably reduced the CPU load in kube-apiserver while aggregating OpenAPI specifications from aggregated API servers. ([#71223](https://github.com/kubernetes/kubernetes/pull/71223), [@sttts](https://github.com/sttts))
- Fixed graceful apiserver shutdown to not drop outgoing bytes before the process terminates. ([#72970](https://github.com/kubernetes/kubernetes/pull/72970), [@sttts](https://github.com/sttts))
### Apps
- Added deleting pods created by `DaemonSet` assigned to not existing nodes. ([#73401](https://github.com/kubernetes/kubernetes/pull/73401), [@krzysztof-jastrzebski](https://github.com/krzysztof-jastrzebski))
- Pod eviction now honors graceful deletion by default if no delete options are provided in the eviction request. ([#72730](https://github.com/kubernetes/kubernetes/pull/72730), [@liggitt](https://github.com/liggitt))
### Auth
- Added `kubectl auth can-i --list` option, which allows users to know what actions they can do in specific namespaces. ([#64820](https://github.com/kubernetes/kubernetes/pull/64820), [@WanLinghao](https://github.com/WanLinghao))
- The `rules` field in RBAC `Role` and `ClusterRole` objects is now correctly reported as optional in the openapi schema. ([#73250](https://github.com/kubernetes/kubernetes/pull/73250), [@liggitt](https://github.com/liggitt))
- `system:kube-controller-manager` and `system:kube-scheduler` users are now permitted to perform delegated authentication/authorization checks by default RBAC policy ([#72491](https://github.com/kubernetes/kubernetes/pull/72491), [@liggitt](https://github.com/liggitt))
- Error messages returned in authentication webhook status responses are now correctly included in the apiserver log ([#73595](https://github.com/kubernetes/kubernetes/pull/73595), [@liggitt](https://github.com/liggitt))
- Fixed use of webhook admission plugins with multi-version custom resources ([#74154](https://github.com/kubernetes/kubernetes/pull/74154), [@mbohlool](https://github.com/mbohlool))
### AWS
- Prevented AWS Network Load Balancer security groups ingress rules to be deleted by ensuring target groups are tagged. ([#73594](https://github.com/kubernetes/kubernetes/pull/73594), [@masterzen](https://github.com/masterzen))
- AWS ELB health checks will now use HTTPS/SSL protocol for HTTPS/SSL backends. ([#70309](https://github.com/kubernetes/kubernetes/pull/70309), [@2rs2ts](https://github.com/2rs2ts))
### Azure
- Fixed failure to detach Azure disk when there is server side error ([#74398](https://github.com/kubernetes/kubernetes/pull/74398), [@andyzhangx](https://github.com/andyzhangx))
- Fixed subnet annotation checking for Azure internal loadbalancer ([#74498](https://github.com/kubernetes/kubernetes/pull/74498), [@feiskyer](https://github.com/feiskyer))
- Fixed mixed protocol issue for Azure load balancer ([#74200](https://github.com/kubernetes/kubernetes/pull/74200), [@andyzhangx](https://github.com/andyzhangx))
- Fixed Azure accounts timeout issue when there is no out-bound IP ([#74191](https://github.com/kubernetes/kubernetes/pull/74191), [@andyzhangx](https://github.com/andyzhangx))
- Fixed Azure Container Registry anonymous repo image pull error ([#74715](https://github.com/kubernetes/kubernetes/pull/74715), [@andyzhangx](https://github.com/andyzhangx))
- Fixed parse devicePath issue on Azure Disk ([#74499](https://github.com/kubernetes/kubernetes/pull/74499), [@andyzhangx](https://github.com/andyzhangx))
### CLI
- Fixed `--help` flag parsing ([#74682](https://github.com/kubernetes/kubernetes/pull/74682), [@soltysh](https://github.com/soltysh))
- Fixed a bug where `kubectl describe` cannot obtain the event messages for a static pod ([#74156](https://github.com/kubernetes/kubernetes/pull/74156), [@gaorong](https://github.com/gaorong))
- Fixed panic when performing a `set env` operation on a `--local` resource ([#65636](https://github.com/kubernetes/kubernetes/pull/65636), [@juanvallejo](https://github.com/juanvallejo))
- Missing directories listed in a user's PATH are no longer considered errors and are instead logged by the `kubectl plugin list` command when listing available plugins. ([#73542](https://github.com/kubernetes/kubernetes/pull/73542), [@juanvallejo](https://github.com/juanvallejo))
- Now users can get object info like:
```bash
a. kubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0:3].name
b. kubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[-2:].name
```
([#73063](https://github.com/kubernetes/kubernetes/pull/73063), [@WanLinghao](https://github.com/WanLinghao))
- The `kubectl api-resources` command will no longer fail to display any resources on a single failure ([#73035](https://github.com/kubernetes/kubernetes/pull/73035), [@juanvallejo](https://github.com/juanvallejo))
- kubectl now loads config file once and uses persistent client config ([#71117](https://github.com/kubernetes/kubernetes/pull/71117), [@dixudx](https://github.com/dixudx))
- Printed `SizeLimit` of `EmptyDir` in `kubectl describe pod` outputs. ([#69279](https://github.com/kubernetes/kubernetes/pull/69279), [@dtaniwaki](https://github.com/dtaniwaki))
- `kubectl delete --all-namespaces` is now a recognized flag. ([#73716](https://github.com/kubernetes/kubernetes/pull/73716), [@deads2k](https://github.com/deads2k))
### Cloud Provider
- Fixed a bug that caused PV allocation on non-English vSphere installations to fail ([#73115](https://github.com/kubernetes/kubernetes/pull/73115), [@alvaroaleman](https://github.com/alvaroaleman))
### Cluster Lifecycle
- kubeadm: fixed nil pointer dereference caused by a bug in url parsing ([#74454](https://github.com/kubernetes/kubernetes/pull/74454), [@bart0sh](https://github.com/bart0sh))
- CoreDNS adds readinessProbe which prevents loadbalancing to unready pods, and also allows rolling updates to work as expected. ([#74137](https://github.com/kubernetes/kubernetes/pull/74137), [@rajansandeep](https://github.com/rajansandeep))
- kubeadm no longer allows using v1alpha3 configs for anything else than converting them to `v1beta1`. ([#74025](https://github.com/kubernetes/kubernetes/pull/74025), [@rosti](https://github.com/rosti))
- kubeadm: now allows the usage of `--kubeconfig-dir` and `--config` flags on kubeadm init ([#73998](https://github.com/kubernetes/kubernetes/pull/73998), [@yagonobre](https://github.com/yagonobre))
- kubeadm: all master components are now exclusively relying on the `PriorityClassName` pod spec for annotating them as cluster critical components. Since `scheduler.alpha.kubernetes.io/critical-pod` annotation is no longer supported by Kubernetes 1.14 this annotation is no longer added to master components. ([#73857](https://github.com/kubernetes/kubernetes/pull/73857), [@ereslibre](https://github.com/ereslibre))
- kubeadm no longer dumps backtrace if it fails to remove the running containers on reset. ([#73951](https://github.com/kubernetes/kubernetes/pull/73951), [@rosti](https://github.com/rosti))
- kubeadm: fixed a bug in the underlying library for diff related to characters like '%' ([#73941](https://github.com/kubernetes/kubernetes/pull/73941), [@neolit123](https://github.com/neolit123))
- Scale max-inflight now limits together with master VM sizes. ([#73268](https://github.com/kubernetes/kubernetes/pull/73268), [@wojtek-t](https://github.com/wojtek-t))
- kubeadm reset: fixed a crash caused by the absence of a configuration file ([#73636](https://github.com/kubernetes/kubernetes/pull/73636), [@bart0sh](https://github.com/bart0sh))
- CoreDNS is now version 1.3.1 ([#73610](https://github.com/kubernetes/kubernetes/pull/73610), [@rajansandeep](https://github.com/rajansandeep))
- kubeadm: When certificates are present in joining a new control plane now ensures that they match at least the required SANs ([#73093](https://github.com/kubernetes/kubernetes/pull/73093), [@ereslibre](https://github.com/ereslibre))
- kubeadm: added back `--cert-dir` option for `kubeadm init phase certs sa` ([#73239](https://github.com/kubernetes/kubernetes/pull/73239), [@mattkelly](https://github.com/mattkelly))
- kubeadm: now explicitly waits for `etcd` to have grown when joining a new control plane ([#72984](https://github.com/kubernetes/kubernetes/pull/72984), [@ereslibre](https://github.com/ereslibre))
- kubeadm: now pulls images when joining a new control plane instance ([#72870](https://github.com/kubernetes/kubernetes/pull/72870), [@MalloZup](https://github.com/MalloZup))
- Exited kube-proxy when configuration file changes ([#59176](https://github.com/kubernetes/kubernetes/pull/59176), [@dixudx](https://github.com/dixudx))
- kube-addon-manager was updated to v9.0, and now uses kubectl v1.13.2 and prunes workload resources via the apps/v1 API ([#72978](https://github.com/kubernetes/kubernetes/pull/72978), [@liggitt](https://github.com/liggitt))
- kubeadm: Now allows certain certs/keys to be missing on the secret when transferring secrets using `--experimental-upload-certs` feature ([#75415](https://github.com/kubernetes/kubernetes/pull/75415), [@ereslibre](https://github.com/ereslibre))
### GCP
- Fixed liveness probe in fluentd-gcp cluster addon ([#74522](https://github.com/kubernetes/kubernetes/pull/74522), [@Pluies](https://github.com/Pluies))
- Reduced GCE log rotation check from 1 hour to every 5 minutes. Rotation policy is unchanged (new day starts, log file size > 100MB). ([#72062](https://github.com/kubernetes/kubernetes/pull/72062), [@jpbetz](https://github.com/jpbetz))
### Network
- Reduces the cache TTL for negative responses to 5s minimum. ([#74093](https://github.com/kubernetes/kubernetes/pull/74093), [@blakebarnett](https://github.com/blakebarnett))
### Node
- Fixed help message for `--container-runtime-endpoint`: only unix socket is support on Linux. ([#74712](https://github.com/kubernetes/kubernetes/pull/74712), [@feiskyer](https://github.com/feiskyer))
- Image garbage collection no longer fails for images with only one tag but more than one repository associated. ([#70647](https://github.com/kubernetes/kubernetes/pull/70647), [@corvus-ch](https://github.com/corvus-ch))
- Re-issued Allocate grpc calls before starting a container that requests device-plugin resources if the cached state is missing. ([#73824](https://github.com/kubernetes/kubernetes/pull/73824), [@jiayingz](https://github.com/jiayingz))
- [CRI] Added a new field called `runtime_handler` into `PodSandbox` and `PodSandboxStatus` to track the `RuntimeClass` information of a pod. ([#73833](https://github.com/kubernetes/kubernetes/pull/73833), [@haiyanmeng](https://github.com/haiyanmeng))
- Kubelet now tries to stop containers in unknown state once before restart or remove. ([#73802](https://github.com/kubernetes/kubernetes/pull/73802), [@Random-Liu](https://github.com/Random-Liu))
- When pleg channel is full, events are now discarded and count is recorded ([#72709](https://github.com/kubernetes/kubernetes/pull/72709), [@changyaowei](https://github.com/changyaowei))
- Fixed the unexpected `NotReady` status when Node's iops is full if the runtime is dockershim. ([#74389](https://github.com/kubernetes/kubernetes/pull/74389), [@answer1991](https://github.com/answer1991))
- Fixed #73264 `cpuPeriod` was not reset, but used as set via flag, although it was disabled via alpha gate ([#73342](https://github.com/kubernetes/kubernetes/pull/73342), [@szuecs](https://github.com/szuecs))
- Updated kubelet CLI summary documentation and generated webpage ([#73256](https://github.com/kubernetes/kubernetes/pull/73256), [@deitch](https://github.com/deitch))
- Set a low `oom_score_adj` for containers in pods with system-critical priorities ([#73758](https://github.com/kubernetes/kubernetes/pull/73758), [@sjenning](https://github.com/sjenning))
- kubelet: Resolved hang/timeout issues when running large numbers of pods with unique `ConfigMap/Secret` references ([#74755](https://github.com/kubernetes/kubernetes/pull/74755), [@liggitt](https://github.com/liggitt))
- Events reported for container creation, start, and stop now report the container name in the message and are more consistently formatted. ([#73892](https://github.com/kubernetes/kubernetes/pull/73892), [@smarterclayton](https://github.com/smarterclayton))
- Removed stale `OutOfDisk` condition from kubelet side ([#72507](https://github.com/kubernetes/kubernetes/pull/72507), [@dixudx](https://github.com/dixudx))
- Fixed the setting of `NodeAddresses` when using the vSphere CloudProvider and nodes that have multiple IP addresses. ([#70805](https://github.com/kubernetes/kubernetes/pull/70805), [@danwinship](https://github.com/danwinship))
- Fixed dockershim panic issues when deleting docker images. ([#75367](https://github.com/kubernetes/kubernetes/pull/75367), [@feiskyer](https://github.com/feiskyer))
- Kubelet no longer watches `ConfigMaps` and `Secrets` for terminated pods, in worst scenario causing it to not be able to send other requests to kube-apiserver ([#74809](https://github.com/kubernetes/kubernetes/pull/74809), [@oxddr](https://github.com/oxddr))
- A new `TaintNodesByCondition` admission plugin taints newly created Node objects as "not ready", to fix a race condition that could cause pods to be scheduled on new nodes before their taints were updated to accurately reflect their reported conditions. This admission plugin is enabled by default if the `TaintNodesByCondition` feature is enabled. ([#73097](https://github.com/kubernetes/kubernetes/pull/73097), [@bsalamat](https://github.com/bsalamat))
- kubelet now accepts `pid=<number>` in the `--system-reserved` and `--kube-reserved` options to ensure that the specified number of process IDs will be reserved for the system as a whole and for Kubernetes system daemons respectively. Please reference `Kube Reserved` and `System Reserved` in `Reserve Compute Resources for System Daemons` in the Kubernetes documentation for general discussion of resource reservation. To utilize this functionality, you must set the feature gate `SupportNodePidsLimit=true` ([#73651](https://github.com/kubernetes/kubernetes/pull/73651)
### Scheduling
- Improved fairness of the scheduling queue by placing pods which are attempted recently behind other pods with the same priority. ([#73700](https://github.com/kubernetes/kubernetes/pull/73700), [@denkensk](https://github.com/denkensk))
- Improved scheduler robustness to ensure that unschedulable pods are reconsidered for scheduling when appropriate. ([#73700](https://github.com/kubernetes/kubernetes/pull/73700), [#72558](https://github.com/kubernetes/kubernetes/pull/72558), [@denkensk](https://github.com/denkensk), [#73078](https://github.com/kubernetes/kubernetes/pull/73078), [@Huang-Wei](https://github.com/Huang-Wei))
### Storage
- Fixed scanning of failed iSCSI targets. ([#74306](https://github.com/kubernetes/kubernetes/pull/74306), [@jsafrane](https://github.com/jsafrane))
- StorageOS volume plugin updated to fix an issue where volume mount succeeds even if request to mount via StorageOS API fails. ([#69782](https://github.com/kubernetes/kubernetes/pull/69782), [@darkowlzz](https://github.com/darkowlzz))
- Ensured directories on volumes are group-executable when using `fsGroup` ([#73533](https://github.com/kubernetes/kubernetes/pull/73533), [@mxey](https://github.com/mxey))
- Updated CSI version to 1.1 ([#75391](https://github.com/kubernetes/kubernetes/pull/75391), [@gnufied](https://github.com/gnufied))
- Ensured that volumes get provisioned based on the zone information provided in `allowedTopologies`. ([#72731](https://github.com/kubernetes/kubernetes/pull/72731), [@skarthiksrinivas](https://github.com/skarthiksrinivas))
- Extended the `VolumeSubpathEnvExpansion` alpha feature to support environment variable expansion ([#71351](https://github.com/kubernetes/kubernetes/pull/71351), [@kevtaylor](https://github.com/kevtaylor))
- Fixed a bug that prevented deletion of dynamically provisioned volumes in Quobyte backends. ([#68925](https://github.com/kubernetes/kubernetes/pull/68925), [@casusbelli](https://github.com/casusbelli))
### Testing
- e2e storage tests now run faster and are easier to read ([#72434](https://github.com/kubernetes/kubernetes/pull/72434), [@pohly](https://github.com/pohly))
- `e2e.test` now rejects unknown `--provider` values instead of merely warning about them. An empty provider name is not accepted anymore and was replaced by `skeleton` (a provider with no special behavior). ([#73402](https://github.com/kubernetes/kubernetes/pull/73402), [@pohly](https://github.com/pohly))
- Updated to go1.11.5 ([#73326](https://github.com/kubernetes/kubernetes/pull/73326), [@ixdy](https://github.com/ixdy))
- Updated to use go1.12.1 ([#75413](https://github.com/kubernetes/kubernetes/pull/75413), [@BenTheElder](https://github.com/BenTheElder))
- e2e tests that require SSH may now be used against clusters that have nodes without external IP addresses by setting the environment variable `KUBE_SSH_BASTION` to the `host:port` of a machine that is allowed to SSH to those nodes. The same private key that the test would use is used for the bastion host. The test connects to the bastion and then tunnels another SSH connection to the node. ([#72286](https://github.com/kubernetes/kubernetes/pull/72286), [@smarterclayton](https://github.com/smarterclayton))
- `PidPressure` now evicts pods from lowest priority to highest priority ([#72844](https://github.com/kubernetes/kubernetes/pull/72844), [@dashpole](https://github.com/dashpole))
- Split up the mondo `kubernetes-test` tarball into `kubernetes-test-portable` and `kubernetes-test-{OS}-{ARCH}` tarballs. ([#74065](https://github.com/kubernetes/kubernetes/pull/74065), [@ixdy](https://github.com/ixdy))
### VMware
- Applied zone labels to vSphere Volumes automatically. The zone labels are visible on the PV: ([#72687](https://github.com/kubernetes/kubernetes/pull/72687), [@subramanian-neelakantan](https://github.com/subramanian-neelakantan))
### Windows
Support for Windows nodes and Windows containers went going stable.
Support for Group Managed Service Accounts (GMSA) for Windows containers in Kubernetes. GMSA are a specific type of Active Directory account that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators across multiple servers.
- Fixed smb remount and unmount issues on Windows ([#73661](https://github.com/kubernetes/kubernetes/pull/73661), [@andyzhangx](https://github.com/andyzhangx), [#75087](https://github.com/kubernetes/kubernetes/pull/75087), [@andyzhangx](https://github.com/andyzhangx))
- Added network stats for Windows nodes and containers ([#74788](https://github.com/kubernetes/kubernetes/pull/74788), [@feiskyer](https://github.com/feiskyer))
- The new test `[sig-network] DNS should now provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]` will validate the host entries set in the ``/etc/hosts`` file (pod's FQDN and hostname), which should be managed by Kubelet. ([#72729](https://github.com/kubernetes/kubernetes/pull/72729), [@bclau](https://github.com/bclau))
- Allowed the kubelet to pass Windows GMSA credentials down to Docker ([#73726](https://github.com/kubernetes/kubernetes/pull/73726), [@wk8](https://github.com/wk8))
- Added kube-proxy support for overlay networking and DSR in Windows and new flags for `network-name`, `source-vip`, and `enable-dsr`. ([#70896](https://github.com/kubernetes/kubernetes/pull/70896), [@ksubrmnn](https://github.com/ksubrmnn))
- windows: Ensured graceful termination when being run as windows service ([#73292](https://github.com/kubernetes/kubernetes/pull/73292), [@steffengy](https://github.com/steffengy))
- vSphere cloud provider now correctly retrieves the VM's UUID when running on Windows ([#71147](https://github.com/kubernetes/kubernetes/pull/71147), [@benmoss](https://github.com/benmoss))
- Kubelet: added `usageNanoCores` from CRI stats provider ([#73659](https://github.com/kubernetes/kubernetes/pull/73659), [@feiskyer](https://github.com/feiskyer))
- Introduced support for Windows nodes into the cluster bringup scripts for GCE. ([#73442](https://github.com/kubernetes/kubernetes/pull/73442), [@pjh](https://github.com/pjh))
- Added network stats for Windows nodes and pods. ([#70121](https://github.com/kubernetes/kubernetes/pull/70121), [@feiskyer](https://github.com/feiskyer))
- CoreDNS is only officially supported on Linux at this time. As such, when kubeadm is used to deploy this component into your kubernetes cluster, it will be restricted (using `nodeSelectors`) to run only on nodes with that operating system. This ensures that in clusters which include Windows nodes, the scheduler will not ever attempt to place CoreDNS pods on these machines, reducing setup latency and enhancing initial cluster stability. ([#69940](https://github.com/kubernetes/kubernetes/pull/69940), [@MarcPow](https://github.com/MarcPow))
## External Dependencies
- Default etcd server and client have been updated to v3.3.10. ([#71615](https://github.com/kubernetes/kubernetes/pull/71615), [#70168](https://github.com/kubernetes/kubernetes/pull/70168))
- The list of validated docker versions has changed. 1.11.1 and 1.12.1 have been removed. The current list is 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09. ([#72823](https://github.com/kubernetes/kubernetes/pull/72823), [#72831](https://github.com/kubernetes/kubernetes/pull/72831))
- The default Go version was updated to 1.12.1. ([#75422](https://github.com/kubernetes/kubernetes/pull/75422))
- CNI has been updated to v0.7.5 ([#75455](https://github.com/kubernetes/kubernetes/pull/75455))
- CSI has been updated to v1.1.0. ([#75391](https://github.com/kubernetes/kubernetes/pull/75391))
- The dashboard add-on has been updated to v1.10.1. ([#72495](https://github.com/kubernetes/kubernetes/pull/72495))
- Cluster Autoscaler has been updated to v1.14.0 ([#75480](https://github.com/kubernetes/kubernetes/pull/75480))
- kube-dns is unchanged at v1.14.13 since Kubernetes 1.12 ([#68900](https://github.com/kubernetes/kubernetes/pull/68900))
- Influxdb is unchanged at v1.3.3 since Kubernetes 1.10 ([#53319](https://github.com/kubernetes/kubernetes/pull/53319))
- Grafana is unchanged at v4.4.3 since Kubernetes 1.10 ([#53319](https://github.com/kubernetes/kubernetes/pull/53319))
- Kibana has been upgraded to v6.6.1. ([#71251](https://github.com/kubernetes/kubernetes/pull/71251))
- CAdvisor has been updated to v0.33.1 ([#75140](https://github.com/kubernetes/kubernetes/pull/75140))
- fluentd-gcp-scaler is unchanged at v0.5.0 since Kubernetes 1.13 ([#68837](https://github.com/kubernetes/kubernetes/pull/68837))
- Fluentd in fluentd-elasticsearch has been upgraded to v1.3.3 ([#71180](https://github.com/kubernetes/kubernetes/pull/71180))
- fluentd-elasticsearch has been updated to v2.4.0 ([#71180](https://github.com/kubernetes/kubernetes/pull/71180))
- The fluent-plugin-kubernetes_metadata_filter plugin in fluentd-elasticsearch has been updated to v2.1.6 ([#71180](https://github.com/kubernetes/kubernetes/pull/71180))
- fluentd-gcp is unchanged at v3.2.0 since Kubernetes 1.13 ([#70954](https://github.com/kubernetes/kubernetes/pull/70954))
- OIDC authentication is unchanged at coreos/go-oidc v2 since Kubernetes 1.10 ([#58544](https://github.com/kubernetes/kubernetes/pull/58544))
- Calico is unchanged at v3.3.1 since Kubernetes 1.13 ([#70932](https://github.com/kubernetes/kubernetes/pull/70932))
- crictl on GCE is unchanged at v1.12.0 since Kubernetes 1.13 ([#69033](https://github.com/kubernetes/kubernetes/pull/69033))
- CoreDNS has been updated to v1.3.1 ([#73610](https://github.com/kubernetes/kubernetes/pull/73610))
- event-exporter has been updated to v0.2.3 ([#67691](https://github.com/kubernetes/kubernetes/pull/67691))
- Es-image has been updated to Elasticsearch 6.6.1 ([#71252](https://github.com/kubernetes/kubernetes/pull/71252))
- metrics-server remains unchanged at v0.3.1 since Kubernetes 1.12 ([#68746](https://github.com/kubernetes/kubernetes/pull/68746))
- GLBC remains unchanged at v1.2.3 since Kubernetes 1.12 ([#66793](https://github.com/kubernetes/kubernetes/pull/66793))
- Ingress-gce remains unchanged at v1.2.3 since Kubernetes 1.12 ([#66793](https://github.com/kubernetes/kubernetes/pull/66793))
- ip-masq-agen remains unchanged at v2.1.1 since Kubernetes 1.12 ([#67916](https://github.com/kubernetes/kubernetes/pull/67916))
| 127.146172 | 753 | 0.774215 | eng_Latn | 0.517532 |
1de7a2faeba0a0eab158d3ff6a253700ba252952 | 1,908 | md | Markdown | README.md | tdolbniak/dbox-pwd | 45929df6cdd29ccb41b09112179870f98822c80d | [
"MIT"
] | 1 | 2019-02-02T11:49:15.000Z | 2019-02-02T11:49:15.000Z | README.md | tdolbniak/dbox-pwd | 45929df6cdd29ccb41b09112179870f98822c80d | [
"MIT"
] | null | null | null | README.md | tdolbniak/dbox-pwd | 45929df6cdd29ccb41b09112179870f98822c80d | [
"MIT"
] | null | null | null | # Dropbox-like password hasher
Inspired by the way Dropbox crew stores their passwords https://blogs.dropbox.com/tech/2016/09/how-dropbox-securely-stores-your-passwords/
# API description
This library has a very simple API, only two functions, both documented in `index.js` file in the root directory.
`encrypt(input, password, bcryptRounds, cipherType)`
This function takes the `input` string and creates a bcrypt hash using `bcryptRounds` param value. The bigger the value, the longer it takes to calculate the hash(the salt to be more exact). This makes it secure and resistant to brute force attacks.
The calculated hash is then encrypted with selected cipher and a password. You can pass any of the supported node.js cipher type. The resulting value is returned via a promise. In case of any error - the promise rejects with an error message.
`compare(input, encryptedHash, password, cipherType)`
This function first decrypts the `encryptedHash`. Obviously the `password` and `cipherType` should match the pair used to encrypt the hash. When it's successfully decrypted, bcrypt compares the plain text `input` with the decrypted hash value. If those values match, the promise returned from `compare()` resolves with `true`, otherwise - `false`. The promise rejects if an error occurs in any of the described steps.
The Dropbox model assumes that each user's password has its own, cryptographically strong salt and that it's stored with the hash. The salt generation is provided by bcrypt itself. This means that wherever you store your users' data you just need to save one token that consists of the hash and salt in a single string.
To make the storage more secure, a global pepper - `password` - is applied to encrypt all users' hashes. To lift the passwords storage to another level you can rotate the global pepper periodically and re-encrypt all passwords once in a while with a new one.
| 95.4 | 417 | 0.790881 | eng_Latn | 0.999138 |
1de8fd37fbcb2271e07c611fc83e8603a355e64e | 2,252 | md | Markdown | _posts/2018-08-18-git-use.md | hannoch/hannoch.github.io | 656765ff023b17a19cadb96db69a4026068fadca | [
"MIT"
] | null | null | null | _posts/2018-08-18-git-use.md | hannoch/hannoch.github.io | 656765ff023b17a19cadb96db69a4026068fadca | [
"MIT"
] | null | null | null | _posts/2018-08-18-git-use.md | hannoch/hannoch.github.io | 656765ff023b17a19cadb96db69a4026068fadca | [
"MIT"
] | null | null | null | ---
layout: post
title: "Git的使用附常用命令速查表"
categories: GitHub
tags: Git
author: Hannoch
---
* content
{:toc}
# 简单使用git
1 通过git clone xxx.@git
将代码down到本地
2 当你修改或者添加文件的时候
3 首先使用git status 查看文件状态,
4 然后使用git add . 将文件add到本地缓冲区
5 再提交到本地仓库:git commit -m “提交记录内容”
6 最后push到githu上面去:git push origin master
在push到上面去的时候会要求我们的账号跟密码,这时候填入即可,最后会出现一张push成功的图片,然后在github仓库上看一下就能看到我们提交的内容记录了(可以参考上篇文章)[http://www.jianshu.com/p/71fbf000b0e7]
当然了,如果是我们自己的仓库可以不用每次git pull 一下远程仓库
如果团队协作的话,先从远程更新到本地,再本地进行修改,也就是merge操作,最后把修改好的文件按照git add , 等等操作进行文件的上传到github上面去
```
git pull 获取新版本
git status
git add ...
git commit -m "add new files"
git push -u origin master
```
# Git 常用命令速查表
## 创建版本库
```
git clone <url> #克隆远程版本库
git init #初始化本地版本库
```
## 修改和提交
```
git status #查看状态
git diff #查看变更内容
git add . #跟踪所有改动过的文件
git add <file> #跟踪指定的文件
git mv <old><new> #文件改名
git rm<file> #删除文件
git rm --cached<file> #停止跟踪文件但不删除
git commit -m "commit messages" #提交所有更新过的文件
git commit --amend #修改最后一次改动
```
## 查看提交历史
```
git log #查看提交历史
git log -p <file> #查看指定文件的提交历史
git blame <file> #以列表方式查看指定文件的提交历史
```
## 撤销
```
git reset --hard HEAD #撤销工作目录中所有未提交文件的修改内容
git checkout HEAD <file> #撤销指定的未提交文件的修改内容
git revert <commit> #撤销指定的提交
git log --before="1 days" #退回到之前1天的版本
```
##分支与标签
```
git branch #显示所有本地分支
git checkout <branch/tag> #切换到指定分支和标签
git branch <new-branch> #创建新分支
git branch -d <branch> #删除本地分支
git tag #列出所有本地标签
git tag <tagname> #基于最新提交创建标签
git tag -d <tagname> #删除标签
```
## 合并与衍合
```
git merge <branch> #合并指定分支到当前分支
git rebase <branch> #衍合指定分支到当前分支
```
## 远程操作
```
git remote -v #查看远程版本库信息
git remote show <remote> #查看指定远程版本库信息
git remote add <remote> <url> #添加远程版本库
git fetch <remote> #从远程库获取代码
git pull <remote> <branch> #下载代码及快速合并
git push <remote> <branch> #上传代码及快速合并
git push <remote> :<branch/tag-name> #删除远程分支或标签
git push --tags #上传所有标签
```
| 20.288288 | 127 | 0.594583 | yue_Hant | 0.347331 |
1de9c8bbab48522bc00cc5c209aa597e3b0307a3 | 810 | md | Markdown | README.md | Koada-op/paper-input-location | 05ebc8ca59b7cd0282d312445696af19d7355e9d | [
"Apache-2.0"
] | 2 | 2017-02-08T17:08:13.000Z | 2017-02-17T21:40:56.000Z | README.md | Koada-op/paper-input-location | 05ebc8ca59b7cd0282d312445696af19d7355e9d | [
"Apache-2.0"
] | 2 | 2017-03-02T11:50:47.000Z | 2017-03-06T15:52:13.000Z | README.md | Koada-op/paper-input-location | 05ebc8ca59b7cd0282d312445696af19d7355e9d | [
"Apache-2.0"
] | 1 | 2017-02-10T05:54:08.000Z | 2017-02-10T05:54:08.000Z | [![Published on webcomponents.org](https://img.shields.io/badge/webcomponents.org-published-blue.svg)](https://www.webcomponents.org/element/MacTheZazou/paper-input-location)
##<paper-input-location>
Material design: Text fields with location autocomplete
`<paper-input-location>` is a single-line text field with Material Design styling and location autocomplete based on the Google Maps API.
Example:
<!--
```
<custom-element-demo>
<template>
<link rel="import" href="paper-input-location.html">
<style>
paper-input-location {
height: 230px;
}
</style>
<next-code-block></next-code-block>
</template>
</custom-element-demo>
```
-->
```html
<paper-input-location maps-api-key="AIzaSyD3E1D9b-Z7ekrT3tbhl_dy8DCXuIuDDRc" label="Place"></paper-input-location>
```
| 27.931034 | 174 | 0.717284 | eng_Latn | 0.478771 |
1deaedb87193834d1328f06f41b9ae4f675dd420 | 723 | markdown | Markdown | _posts/2021-07-20-mongo.markdown | HyungwonJin/HyungwonJin.github.io | be7a3e5c69522cc457923560fce8b49f8c3baf48 | [
"MIT"
] | null | null | null | _posts/2021-07-20-mongo.markdown | HyungwonJin/HyungwonJin.github.io | be7a3e5c69522cc457923560fce8b49f8c3baf48 | [
"MIT"
] | null | null | null | _posts/2021-07-20-mongo.markdown | HyungwonJin/HyungwonJin.github.io | be7a3e5c69522cc457923560fce8b49f8c3baf48 | [
"MIT"
] | null | null | null | ---
layout: post
title: "[mongoDB] 서버에 연결이 안될 때"
subtitle: "When mongo can't access server"
categories: develop
tags: back
comments: true
---
## macOS에서 local 서버에 연결이 안될 때
```
MongoDB shell version v5.0.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:372:17
@(connect):2:6
exception: connect failed
exiting with code 1
```
이렇게 뜨면
```javascript
**인텔맥**: mongod --config /usr/local/etc/mongod.conf --fork
**M1**: mongod --config /opt/homebrew/etc/mongod.conf --fork
``` | 30.125 | 167 | 0.723375 | eng_Latn | 0.626874 |
1deb43c41b316b323c4334fef7bdae1cb024b0fb | 156 | md | Markdown | README.md | quxiaofeng/rails.ml | 7c6b4a20fa7bedb5babf5e1cb8b61eb3f2a7e565 | [
"MIT"
] | null | null | null | README.md | quxiaofeng/rails.ml | 7c6b4a20fa7bedb5babf5e1cb8b61eb3f2a7e565 | [
"MIT"
] | null | null | null | README.md | quxiaofeng/rails.ml | 7c6b4a20fa7bedb5babf5e1cb8b61eb3f2a7e565 | [
"MIT"
] | null | null | null | # Rails.ML
[![Build Status](https://travis-ci.org/quxiaofeng/rails.ml.svg)](https://travis-ci.org/quxiaofeng/rails.ml)
[www.rails.ml](http://www.rails.ml) | 31.2 | 107 | 0.711538 | kor_Hang | 0.200361 |
1debea097fc31b91ff53ece7dc8d933513ceb3c4 | 80 | md | Markdown | _indicators/ru/3-4-2.md | LucyGwilliamAdmin/sdg-site-armenia | c6b5238a0a9c34759563e9b08d256346635fc997 | [
"MIT"
] | 1 | 2021-07-23T18:07:46.000Z | 2021-07-23T18:07:46.000Z | _indicators/ru/3-4-2.md | LucyGwilliamAdmin/sdg-site-armenia | c6b5238a0a9c34759563e9b08d256346635fc997 | [
"MIT"
] | 15 | 2019-01-25T11:49:58.000Z | 2019-05-27T14:24:06.000Z | _indicators/ru/3-4-2.md | LucyGwilliamAdmin/sdg-site-armenia | c6b5238a0a9c34759563e9b08d256346635fc997 | [
"MIT"
] | 8 | 2019-01-21T11:08:45.000Z | 2020-08-14T12:43:27.000Z | ---
indicator: 3.4.2
layout: indicator
permalink: /ru/3-4-2/
language: ru
---
| 8.888889 | 21 | 0.6375 | eng_Latn | 0.20663 |
1ded929b8fd0bb369c49bb9a510bb94113829998 | 1,130 | md | Markdown | add/metadata/System.Web.UI.DataVisualization.Charting/CustomizeLegendEventArgs.meta.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | add/metadata/System.Web.UI.DataVisualization.Charting/CustomizeLegendEventArgs.meta.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | add/metadata/System.Web.UI.DataVisualization.Charting/CustomizeLegendEventArgs.meta.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-01-06T09:36:01.000Z | 2021-01-06T09:36:01.000Z | ---
uid: System.Web.UI.DataVisualization.Charting.CustomizeLegendEventArgs
ms.technology:
- "dotnet-webforms"
ms.author: "mblythe"
manager: "mblythe"
---
---
uid: System.Web.UI.DataVisualization.Charting.CustomizeLegendEventArgs.#ctor(System.Web.UI.DataVisualization.Charting.LegendItemsCollection)
ms.technology:
- "dotnet-webforms"
ms.author: "mblythe"
manager: "mblythe"
---
---
uid: System.Web.UI.DataVisualization.Charting.CustomizeLegendEventArgs.LegendName
ms.technology:
- "dotnet-webforms"
ms.author: "mblythe"
manager: "mblythe"
---
---
uid: System.Web.UI.DataVisualization.Charting.CustomizeLegendEventArgs.#ctor(System.Web.UI.DataVisualization.Charting.LegendItemsCollection,System.String)
ms.technology:
- "dotnet-webforms"
ms.author: "mblythe"
manager: "mblythe"
---
---
uid: System.Web.UI.DataVisualization.Charting.CustomizeLegendEventArgs.LegendItems
ms.technology:
- "dotnet-webforms"
ms.author: "mblythe"
manager: "mblythe"
---
---
uid: System.Web.UI.DataVisualization.Charting.CustomizeLegendEventArgs.#ctor
ms.technology:
- "dotnet-webforms"
ms.author: "mblythe"
manager: "mblythe"
---
| 23.541667 | 154 | 0.767257 | kor_Hang | 0.20295 |
1dedb8f9c03c43638090b9aa4c5bcbd60e6d65bf | 9,397 | md | Markdown | _posts/2020-1-28-hagakure.md | Ashkanph/blog | 9d145082178a13b0523833b06fa25c4091e945d2 | [
"MIT"
] | null | null | null | _posts/2020-1-28-hagakure.md | Ashkanph/blog | 9d145082178a13b0523833b06fa25c4091e945d2 | [
"MIT"
] | null | null | null | _posts/2020-1-28-hagakure.md | Ashkanph/blog | 9d145082178a13b0523833b06fa25c4091e945d2 | [
"MIT"
] | null | null | null | ---
layout: post
title: هاگاکوره
dir: rtl
jdate: سهشنبه، ۸ بهمن ۱۳۹۸
categories: [literature]
tags: [from books]
---
کتاب هاگاکوره، گزینگویههای یاماموتو تسونهتومو است که پس از مرگ اربابش از دنیا کنارهگیری کرد و گفتههایش توسط یک سامورایی جوان در این کتاب گردآوری شدهاست. این کتاب در سالهای ۱۷۰۹ تا ۱۷۱۶ میلادی نوشته شد اما تا سالهای پس از آن منتشر نشد. اوج محبوبیت این کتاب در سالهای جنگ جهانی دوم بود که سربازان ژاپنی آنرا به عنوان کتاب طریقت سامورایی به همراه داشتند.
---
بخشهای انگلیسی که در زیر آمده از برگردان انگلیسی زیر انتخاب شدهاند:
<div class="english-text">
Hagakure: The Secret Wisdom of the Samurai<br>
translated by Alexander Bennett<br>
Tuttle publishing, 2014<br>
</div>
برگردان فارسی هم متعلق به کتاب زیر است:<br>
هاگاکوره، کتاب سامورایی<br>
مترجم: سید رضا حسینی<br>
نشر چشمه. چاپ دوم ۱۳۸۹.<br>
<hr>
<div class="english-text">
Shida Kichinosuke said: “If it won’t damage your reputation whether you live or die, then you should live.” This is an oxymoron. He also said: “When you wonder if you should go or not, don’t go.” And: “When you wonder if you should eat or not, it is better not to eat. When you wonder if you should die or not, it is better to die.”<br><br>
From the first book
</div>
<hr>
<div class="english-text">
In the Kamigata region, people carry multi-layered picnic boxes with them for enjoying the cherry blossoms. They are only used for one day, and having served their purpose, people just stamp on the boxes and discard them as they leave. It is indeed a capital conception. The end is important for all things.<br><br>
From the second book
</div>
<hr>
<div class="english-text">
While walking together along the path, Master Jōchō proclaimed, “Are men not like masterfully controlled puppets? It is magnificent craftsmanship that allows us to walk, jump, prance, and speak even though there are no strings attached. We may be guests at next year’s Bon festival. We forget it is an ephemeral world in which we live.”<br><br>
From the second book
</div>
<hr>
<div class="english-text">
It is discerning to behold the world as if it were a dream. You want to quickly wake up if you have a nightmare, and are relieved that it was only a dream. This world in which we exist now is no different.<br><br>
From the second book
</div>
<hr>
<div class="english-text">
Two warriors met on a one-lane bridge but refused to give way, threatening to cut each other down if the other did not move. A radish seller came between the two men, and catching each one on either end of his shoulder-carrying pole, picked them up and spun them around to the opposite ends of the bridge. There are many ways of solving problems, and this counts as constructive service to one’s lord. It is most unfortunate to see precious retainers die needlessly, or create needless discord.<br><br>
From the second book
</div>
<hr>
<div class="english-text">
The original intention of love is to take it with you to the grave. There is a poem that goes, "Observe when I am dead, my internal burning love for you, from the smoke ascending from my body." When I suggested that this was analogous to the highest form of love, they all concurred, and thereupon we called ourselves the 'smoke blokes'.
<br><br>
From the second book
</div>
ناخودآگاه به یاد این شعر حافظ افتادم:<br><br>
بگشای تربتم را بعد از وفات و بنگر / کز آتش درونم دود از کفن برآید
<hr>
<div class="english-text">
With the passing of Lord Kōkokuin, his aide Ezoe Kinbei took his ashes to Mount Kōya to be consecrated. Kinbei then shut himself away in a hermitage and carved a figurine of his master from wood, and another of him prostrating before him. Kinbei returned home, probably on the first anniversary of his death, and committed oibara. The statue he carved was brought from Mount Kōya and enshrined at the Kōdenji Temple in Saga.<br><br>
From the third book
</div>
<hr>
<div class="english-text">
Written in the Gunpō-kikisho, “Win first, then attack” is the essence of certain victory. Resourcefulness in peacetime means preparing for war. You must be able to defeat an enemy of 100,000 men with a force of 500.<br><br>
From the third book
</div>
<hr>
<div class="english-text">
Warriors in olden times used to grow mustaches because their ears and noses would be removed and taken by the enemy as proof of their triumph in battle. The mustache was cut off together with the nose to confirm that the trophy head was that of a man and not a woman. If the head was found to be clean-shaven, it was just left to rot in the mud. A samurai cultivated his drooping mustache to ensure that his head, once removed, was not unceremoniously discarded. Master Jōchō said, “A man’s complexion will not change after being killed in battle, so long as he washes his face every morning.”
<br><br>
From the third book
</div>
جنگجویان قدیم سبیل میگذاشتند، چرا که در گذشته به علامت آنکه یک مرد در جنگ کشته شده است، گوش و دماغش را میبریدند و به اردوی خود میبردند، و برای آنکه آن شخص با یک زن اشتباه گرفته نشود،سبیلش را نیز به همراه بینیاش میبریدند. در این هنگام اگر بر صورت وی سبیل نبود، سر جنگجو را به دور میانداختند تا با سر یک زن اشتباه نشود. از اینرو، سبیل گذاشتن یکی از طریقت ساموراییها بود تا بدینسان هیچگاه سرشان پس از مرگ بهدور افکنده نشود.<br>
چونهتومو گفته است، اگر انسان صورت خود را هر روز با آب بشوید، پس از کشتهشدن، صورتش رنگ و رخسار خود را از دست نخواهد داد.
<hr>
<div class="english-text">
A certain man said, “There is a poem from the shrine [of Sugawara no Michizane] that goes: ‘If one follows the path of sincerity in his heart, although he may not pray, will the deities not watch over him?’ What can ‘path of sincerity’ possibly mean?” Another man answered. “As you appear to be partial to poetry, allow me to respond in verse. ‘Inasmuch as all things in this world are deceptive, sincerity is revealed only in death.’ Living as if already dead is how to embody the path of sincerity.”<br><br>
From the third book
</div>
<hr>
<div class="english-text">
When the priest Ungo Oshō from Matsushima was traversing through the mountains one evening, he was ambushed by a bandit. Ungo exclaimed: “I am from this region. I am no itinerant priest. I have no money. I will give you the clothes I wear, but entreat you not to take my life.” The bandit reacted, “This is a wasted effort. I have no need for clothes,” and moved on. After walking a distance of around 1-chō, Ungo turned and hailed him back. “I have broken my vows by telling an untruth. As I was so flustered, I forgot about this piece of silver in my purse, even though I claimed to have no money. Do not be angry with me. Here it is, please accept it.” The bandit was awestruck by his admission. He shaved off his hair on the spot, and became a disciple.<br><br>
From the third book
</div>
<hr>
<div class="english-text">
The following are teachings of Yamamoto Jin’uemon.<br><br>
1. Young men should not engage in poetry, reading graphic novels, gō, shōgi, or other such activities that will cause listlessness. Members of the Nakano clan should carry an oaken sword and hone their military preparedness for service.<br>
2. Anything is achievable through single-minded endeavor (bannō-isshin).<br>
3. Dog skin inside, tiger hide outside.<br>
4. The end phrase of a letter will not wear your brush out. You won’t break your back by bowing politely. (You can never be too polite.)<br>
5. Be sure to secure even a broiled chicken. (You should not let your guard down under any circumstance.)<br>
6. Whip even a galloping horse. (Don’t take things for granted, especially if they seem to be going well.)<br>
7. A man who asks questions candidly to your face holds no malice.<br>
8. A man lives for one generation, but a name forever.<br>
9. Money is there for the asking, but [good] men are not.<br>
10. A man who feigns laughter is a coward. A woman who does so is prurient.<br>
11. A real man will be able to tell tall tales seven times in 1-chō. (1-chō = 358 feet (110m).)<br>
12. It is not rude to ask even when you know the answer, but an imperative if you don’t.<br>
13. If you can see one direction, you can see eight. (As long as you are careful in your observance, you will be able to perceive all things.)<br>
14. If you know one truth, you will awaken to everything.<br>
15. Wrap your will in pine needles. (If you are sincere, the gifts you send as tribute are allowed to be small as it’s the thought that counts.)<br>
16. A trustworthy man is a kusemono (heroic warrior).<br>
17. Don’t insert your hands in the sides of your hakama. It is careless.<br>
18. Don’t open your mouth and yawn in front of others. Conceal it behind your sleeve or fan.<br>
19. A straw hat or kabuto should be worn with the front part low. (To conceal where one is looking.)<br>
20. When he was dying he said: “Everyone knows the name of Jin’uemon. It would be regrettable to groan because of the pain.” He never groaned until the very end.<br><br>
From the third book
</div>
<hr>
<div class="english-text">
Matsudaira Izu-no-Kami said to Mizuno Kenmotsu [Tadayoshi]:450 “You are an expedient man, but it is a pity you are so short.” Kenmotsu retorted: “Indeed it is true. Some things turn out contrary to one’s liking. I would be a little taller if I was to chop your head off and attach it to my feet, but I am unable to have it my way.”<br><br>
From the third book
</div>
| 51.070652 | 765 | 0.752261 | eng_Latn | 0.999265 |
1dedbfa58e4e69e8038884bae85d2f743ab97abe | 260 | md | Markdown | README.md | shannongamby/fizzbuzz_javascript | fef494de01dd6e6118ee8a4a756b093efe32a340 | [
"MIT"
] | null | null | null | README.md | shannongamby/fizzbuzz_javascript | fef494de01dd6e6118ee8a4a756b093efe32a340 | [
"MIT"
] | null | null | null | README.md | shannongamby/fizzbuzz_javascript | fef494de01dd6e6118ee8a4a756b093efe32a340 | [
"MIT"
] | null | null | null | ## FizzBuzz in Javascript
I used `jasmine` to test-drive FizzBuzz in a new-to-me language.
#### Installing dependencies:
- After cloning this repo, run `npm install` on the command line
#### Running the tests:
- Type `open SpecRunner.html` on the command line
| 37.142857 | 64 | 0.742308 | eng_Latn | 0.979351 |
1def2717d2c46fb48bf57c190d534a948c6bc01a | 50 | md | Markdown | edge/Address Bar Spoof I/README.md | FingerLeakers/uxss-db | 02302879c9b27f6f92c381abc52d56e227fcb610 | [
"MIT"
] | 654 | 2018-01-26T07:18:21.000Z | 2022-03-29T07:25:34.000Z | edge/Address Bar Spoof I/README.md | vngkv123/uxss-db | ee66d52bf0b3ef1372f2c0a9ee249de4a3df0064 | [
"MIT"
] | 6 | 2018-08-03T04:28:05.000Z | 2018-10-10T21:37:10.000Z | edge/Address Bar Spoof I/README.md | vngkv123/uxss-db | ee66d52bf0b3ef1372f2c0a9ee249de4a3df0064 | [
"MIT"
] | 104 | 2018-01-27T08:00:22.000Z | 2022-03-11T09:23:15.000Z | Link: https://www.cracking.com.ar/demos/edgespoof/ | 50 | 50 | 0.78 | kor_Hang | 0.412912 |
1def889141168b3c524b85b7552688b987689ea7 | 574 | md | Markdown | CHANGELOG.md | AndreyKotovich/pay-with-amazon | 6cc6aae41c22c3cd6f90a5003ee6d8c65fbf1b64 | [
"Unlicense",
"MIT"
] | null | null | null | CHANGELOG.md | AndreyKotovich/pay-with-amazon | 6cc6aae41c22c3cd6f90a5003ee6d8c65fbf1b64 | [
"Unlicense",
"MIT"
] | null | null | null | CHANGELOG.md | AndreyKotovich/pay-with-amazon | 6cc6aae41c22c3cd6f90a5003ee6d8c65fbf1b64 | [
"Unlicense",
"MIT"
] | 1 | 2021-04-20T20:31:17.000Z | 2021-04-20T20:31:17.000Z | ## Pay with Amazon CHANGELOG
### Version 1.0.4 (February 13, 2017)
- Fixes initialization API availability check
- Adds login response payload to 'login' event
### Version 1.0.3 (December 25, 2014)
- Adds production option
### Version 1.0.2 (October 9, 2014)
- Adds class-based style support, to allow for responsive widget sizing
- Adds openedClass option
### Version 1.0.1 (September 17, 2014)
- Switches build system to Duo
- Adds new events: 'login', 'ready.addressBook', 'ready.wallet', 'ready.consent'
### Version 1.0.0 (September 3, 2014)
- Initial release
| 22.96 | 80 | 0.714286 | eng_Latn | 0.762745 |
1defa980322c7cda93eaf9043e3b49c81fa3a6a6 | 36 | md | Markdown | INSTALL.md | loxiuras/Nova | e2810da07c71dba021c78901c03546b2206cf75c | [
"MIT"
] | 1 | 2021-08-11T16:34:43.000Z | 2021-08-11T16:34:43.000Z | INSTALL.md | loxiuras/Nova | e2810da07c71dba021c78901c03546b2206cf75c | [
"MIT"
] | null | null | null | INSTALL.md | loxiuras/Nova | e2810da07c71dba021c78901c03546b2206cf75c | [
"MIT"
] | null | null | null | ## More information is coming soon.
| 18 | 35 | 0.75 | eng_Latn | 0.999461 |
1defd7392c7fe49ed519247596a878a2443b6c78 | 4,410 | md | Markdown | packages/boundless-input/README.md | enigma-io/boundless | 0db516a4319a1fae6ff7df602d7a121dc5997b6c | [
"MIT"
] | 262 | 2017-01-25T20:12:18.000Z | 2020-04-28T21:22:48.000Z | packages/boundless-input/README.md | enigma-io/boundless | 0db516a4319a1fae6ff7df602d7a121dc5997b6c | [
"MIT"
] | 34 | 2017-01-25T19:53:18.000Z | 2019-03-19T10:35:36.000Z | packages/boundless-input/README.md | enigma-io/boundless | 0db516a4319a1fae6ff7df602d7a121dc5997b6c | [
"MIT"
] | 13 | 2017-02-07T03:41:21.000Z | 2019-09-15T17:50:30.000Z | <!---
THIS IS AN AUTOGENERATED FILE. EDIT PACKAGES/BOUNDLESS-INPUT/INDEX.JS INSTEAD.
-->
# Input
Input abstracts away the cross-platform differences of placeholder styling and behaviors, for example: Internet Explorer dismisses native placeholders on input focus and other platforms do not. This component ensures that text input controls will feel and behave similarly on more devices.
## Component Instance Methods
When using `Input` in your project, you may call the following methods on a rendered instance of the component. Use [`refs`](https://facebook.github.io/react/docs/refs-and-the-dom.html) to get the instance.
- __getValue()__
returns the current value of the input field
- __setValue(string)__
programmatically set the input value; useful for clearing out the input in "uncontrolled" mode -- note that digging into the internals and setting the `refs.field.value = ''` directly will not trigger events and messes up the internal state of the component
## Installation
```bash
npm i boundless-input --save
```
Then use it like:
```jsx
/** @jsx createElement */
import { createElement, PureComponent } from 'react';
import Input from 'boundless-input';
export default class InputDemo extends PureComponent {
state = {
input: '',
}
handleChange = (e) => this.setState({ input: e.target.value })
render() {
return (
<div className='spread'>
<div>
<h5>hidePlaceholderOnFocus="false"</h5>
<Input
hidePlaceholderOnFocus={false}
inputProps={{
placeholder: 'Start typing and I disappear!',
}} />
</div>
<div style={{ marginLeft: '1em' }}>
<h5>hidePlaceholderOnFocus="true"</h5>
<Input
hidePlaceholderOnFocus={true}
inputProps={{
placeholder: 'Focus on me and I disappear!',
}} />
</div>
<div style={{ marginLeft: '1em' }}>
<h5>"controlled" input</h5>
<Input
hidePlaceholderOnFocus={true}
inputProps={{
placeholder: 'Focus on me and I disappear!',
onChange: this.handleChange,
value: this.state.input,
}} />
</div>
</div>
);
}
}
```
Input can also just be directly used from the main [Boundless library](https://www.npmjs.com/package/boundless). This is recommended when you're getting started to avoid maintaining the package versions of several components:
```bash
npm i boundless --save
```
the ES6 `import` statement then becomes like:
```js
import { Input } from 'boundless';
```
## Props
> Note: only top-level props are in the README, for the full list check out the [website](https://boundless.js.org/Input).
### Required Props
There are no required props.
### Optional Props
- __`*`__ · any [React-supported attribute](https://facebook.github.io/react/docs/tags-and-attributes.html#html-attributes)
Expects | Default Value
--- | ---
`any` | `n/a`
- __`component`__ · overrides the HTML container tag
Expects | Default Value
--- | ---
`string` | `'div'`
- __`hidePlaceholderOnFocus`__ · triggers the placeholder to disappear when the input field is focused, reappears when the user has tabbed away or focus is moved
Expects | Default Value
--- | ---
`bool` | `true`
- __`inputProps`__
Expects | Default Value
--- | ---
`object` | `{ type: 'text' }`
## Reference Styles
### Stylus
You can see what variables are available to override in [variables.styl](https://github.com/enigma-io/boundless/blob/master/variables.styl).
```stylus
// Redefine any variables as desired, e.g:
color-accent = royalblue
// Bring in the component styles; they will be autoconfigured based on the above
@require "node_modules/boundless-input/style"
```
### CSS
If desired, a precompiled plain CSS stylesheet is available for customization at `/build/style.css`, based on Boundless's [default variables](https://github.com/enigma-io/boundless/blob/master/variables.styl).
| 30.625 | 289 | 0.619274 | eng_Latn | 0.943765 |
1df04bcecbd476802c905fc7227dd88dc3a4f6b5 | 5,162 | md | Markdown | _posts/2021-08-06-Batch-Normalization.md | OKR9871/OKR9871.github.io | 207edd1d5d233cd7db16f42c32d91be937c32f71 | [
"MIT"
] | 1 | 2021-08-03T15:55:20.000Z | 2021-08-03T15:55:20.000Z | _posts/2021-08-06-Batch-Normalization.md | OKR9871/OKR9871.github.io | 207edd1d5d233cd7db16f42c32d91be937c32f71 | [
"MIT"
] | null | null | null | _posts/2021-08-06-Batch-Normalization.md | OKR9871/OKR9871.github.io | 207edd1d5d233cd7db16f42c32d91be937c32f71 | [
"MIT"
] | null | null | null | ---
title: "[Paper Review] Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift"
excerpt: "Batch Normalization Paper Review"
categories:
- Paper_Review
tags:
- [Blog, jekyll, Github, Git]
toc: true
toc_sticky: true
date: 2021-08-06
last_modified_at: 2021-08-06
---
# Batch Normalization
## Abstract
깊은 층의 Neural Network를 학습시키는 것은 각 layer에 들어가는 input의 분포가 훈련중 변화기 때문에 어렵습니다.
또한 낮은 learning rate, parameter의 초기화, non-linearity의 감소와 같이 여러 문제점이 존재하는데 이를 'internal covariate shift'라고 합니다.
논문에서 제시하는 배치정규화는 더 높은 learning rate를 사용하게 해주고, 초기화에 덜 민감하고, 규제의 역할도 수행한다고 합니다.
## 1. Introduction
딥러닝은 여러 분야에서 발전해 왔습니다. 그중에서 SGD를 포함한 다양한 방법은 이러한 deep network의 효과적은 학습을 가능하게 해주었습니다.
SGD는 parameter $\theta$를 loss값이 최소가 되도록 최적화 해주는 방법입니다. 식으로 적으면 아래와 같습니다.
$\theta = argmin{1\over N}\sum\limits_{i=1}^{N}l(x_{i},\theta)$
전체 N개의 training set이 있을때 SGD는 훈련중에는 mini-batch를 사용합니다. mini-batch에 대한 gradient계산은 다음과 같습니다.
${1\over m}{\partial l(x_{i}, \theta)\over\partial\theta}$
이렇게 mini-batch를 사용하는데는 여러 장점이 있습니다. 첫번째 장점은 loss함수의 gradient는 추정치이지만 batch의 사이즈에 따라서 더 높은 정확서을 보여줄 수 있습니다. 둘째로 병렬처리가 가능한 현재 기술로 m개의 batch로 나누어서 처리하는 것이 계산상의 효율을 높이기 때문입니다.
이렇게 SGD를 사용하는 방법이 간단하면서도 효과적이지만 단점이 존재합니다.
단점으로는 많은 hyper parameter의 조정을 요구합니다. 또한 훈련중 input값의 많은 변화로인해 layer의 paremeter들이 많은 영향을 받습니다. 이러한 문제를 이전에 'covariate shift'라고 부리기로 했습니다.
'covariate shift'문제는 domain adaptation으로 해결하고자 합니다. 하지만 이 문제는 input layer뿐만아니라 훈련중 모든 과정에 거쳐서 나타나게 됩니다. 예를 들면 $l = F_{2}(F_{1}(u,\theta_{1}), \theta_{2})$를 network가 계산하고 $F_{1}, F_{2}$가 임의 변환함수라 할때, parameter $\theta_{1},\theta_{2}$를 $l$을 최소화 하도록 학습시켜주어야 합니다.
이대 위 식을 두부분으로 분리해서 생각하면 $x=F_{1}(u,\theta_{1})$라 하고 이를 $F_{2}$의 입력으로 넣을 수 있습니다. 이때 입력으로 들어오는 값이 고정된 분포를 가지고 있다면 sub-network의 외부에서는 긍정적인 결과를 보여줍니다. 하지만 이렇게 통과한 값이 activation function을 거쳐가면서 값은 점차 작아지고, gradient또한 작아지는 모습을 보이면서 사라지게 되는 gradient vaninshing문제가 발생하기도 합니다. 이러한 문제는 네트워크의 층이 깊어질 수록 심해집니다.
이런문제를 해결하기 위해 현재는 ReLU함수의 사용, parameter초기화 방법, 작은 learning rate의 사용으로 해결하고 있습니다. 하지만 각 layer의 input의 분포가 학습중에 안정된다면 위의 문제를 해결하는데 도움이 될것입니다.
이 논문에서 제시하는 Batch Normalization(배치 정규화)는 'internal covariate shift'문제를 해결하는데 도움을 줍니다. 이를 위해서 input의 고정된 평균, 분산을 이용해 분포를 맞춰줍니다. 또한 gradient flow가 초기값에 예민한 문제를 해결하였고, 높은 learning rate의 사용을 가능하게 하였습니다. 또한 saturated 구간에 빠지는 것을 막아줍니다.
## 2. Towards Reducing Internal Covariate Shift
우리는 'internal covariate shift'를 학습중 parameter의 변화에 따라 활성값의 분포가 변화하는 문제로 정의하였습니다.
각 layer의 input의 분포를 맞춰준다면 학습을 더 효율적이고 빠르게 할 수 있습니다. 이러한 방법은 오래전에 whitening이라는 방법을 사용하고 있었습니다.
이 whitening이라는 방법은 평균을 0, 분산을 일정하게 맞춰주고 decorrelation하게 함으로써 정규화를 진행해주는 방법입니다. 하지만 이러한 변형이 최적화 과정중에 들어가게 된다면, gradient descent 과정에서 parameter를 업데이트하는데 gradient의 효과가 없어지게 됩니다. 예를 들면, bias b를 추가해서 정규화와 학습을 진행하는 경우 b에 대한 gradient값은 계산중에 사라지게 되어 학습이 올바르게 되지 않습니다. 또한 whitening의 경우 계산시에 covariance matrix의 계산을 진행하는데 이는 계산량이 많아져 비효율적인 문제를 발생시킵니다. 또한 역전파를 위한 covariance matrix의 역함수 게산도 많은 계산을 요구하기 때문에 비효율적입니다. 이러한 문제들 때문에 필자는 새로운 normalize를 위한 방법을 생각했습니다.
## 3. Normalization via Mini-Batch Statistics
이 논문에서는 input과 output의 feature에 대해 whitening을 적용하는것 대신에 평균을 0으로, 분산을 1로 정규화 과정을 진행하였습니다.
$\hat{x}={x^{(k)}-E[x^{(k)}]\over\sqrt{Var[x^{(k)}]}}$
이렇게 normalizing함으로써 feature에 변화가 일어날 수 있습니다.
![2021-08-06-1](https://user-images.githubusercontent.com/55619678/128522608-669ba101-1f2b-41f9-a40d-ca38e3d05884.PNG)
예를들면 위 그림과 같이 sigmoid 함수에 input으로 들어가면 비선형함수이지만 비선형성을 제한하는 문제가 발생할 수도 있습니다. 따라서 $\gamma^{k},\beta^{k}$와 같은 파라미터를 도입해서 해결하였습니다.
$y^{(k)}=\gamma^{(k)}\hat{x}^{(k)}+\beta^{(k)}$
이 parameter들은 학습중에 다른 parameter와 같이 학습됩니다.
학습되어지기는 하지만 $\gamma^{k} = \sqrt{Var[x^{k}]},\beta^{k}=E[x^{k}]$로 설정하면 원래의 activation function을 복원할 수 있습니다.
전체 training set을 사용하는것은 비효율적일 수 있습니다. 따라서 mini-batch를 사용하는데 평균과 분산을 계산할때 또한 각 mini-batch의 분산과 평균을 이용해서 정규화를 진행합니다.
다음은 batch normalization을 진행하는 algorithm입니다.
![2021-08-06-2](https://user-images.githubusercontent.com/55619678/128522610-1bf7fac6-6102-4615-a496-5bc0e931b6d0.png)
역전파 계산을 하귕해서 loss에 대한 gradient를 계산할때는 chain rule을 이용해서 계산해줍니다.
![2021-08-06-3](https://user-images.githubusercontent.com/55619678/128524111-36e93d2f-343a-42a4-9e23-3b3052791242.png)
위의 식과 같이 기존의 역전파 계산과 같이 loss값을 필요한 parameter들로 편미분하여 gradient를 계산해줍니다.
이제 위에서 설명한것을 모두 합하여 training과정을 알고리즘으로 나타낸 것을 살펴보겠습니다.
![2021-08-06-4](https://user-images.githubusercontent.com/55619678/128524105-078a4715-2b30-4e3a-800c-d61479effeda.png)
학습은 기존의 방식과 같이 mini-batch를 이용해서 순전파와 역전파단계를 거쳐서 학습을 진행하고 validation 단계에서는 모든 파라미터는 고정시킨채로 deterministic한 결과를 뽑아내야하기에 moving average를 이용해 training 과정에서의 미니 배치를 뽑으면서 sample mean, sample variance를 구해서 이를 고정으로 사용합니다.
Batch normalization은 주로 non-linear activation fucntion바로 앞에 위치하며 이렇게 함으로써 출력값이 안정된 분포를 가지게 해줍니다.
## 5. Conclusion
이 논문에서 제시한 방법은 deep network의 학습을 월등히 빠르고 효율적으로 가능하게 합니다. 'internal covariate shift'문제에 기반해 학습이 잘 되지않는 문제를 해결하였습니다. 또한 오직 두개의 parameter만을 추가하여 계산의 복잡성을 높이지 않아 network의 성능을 그대로 유지하였으며 dropout을 사용하지 않고도 규제의 효과를 낼 수 있었습니다.
이 논문을 읽으면서 Batch Normalization의 아이디어가 나온 원인을 알 수 있었으며 간단한 방법을 통해 학습의 효과를 높이 끌어올릴 수 있다는 점을 들어 현재까지도 많은 network에서 사용중인 이유를 이해할 수 있었습니다. | 86.033333 | 452 | 0.744091 | kor_Hang | 1.00001 |
1df20412409f665eb4ea87cbe00d06449294bba8 | 185 | md | Markdown | code/TemplateStudioForWinUICs/Templates/_comp/MT/Project.Blank/README_postaction.md | Microsoft/WindowsTemplateStudio | e8ef6ce3a97939c3ad7a15d5b3471feb96ae1983 | [
"MIT"
] | 1,558 | 2017-05-11T04:53:27.000Z | 2019-05-06T18:43:37.000Z | code/TemplateStudioForWinUICs/Templates/_comp/MT/Project.Blank/README_postaction.md | Microsoft/WindowsTemplateStudio | e8ef6ce3a97939c3ad7a15d5b3471feb96ae1983 | [
"MIT"
] | 2,007 | 2017-05-10T20:30:42.000Z | 2019-05-01T21:21:48.000Z | code/TemplateStudioForWinUICs/Templates/_comp/MT/Project.Blank/README_postaction.md | Microsoft/WindowsTemplateStudio | e8ef6ce3a97939c3ad7a15d5b3471feb96ae1983 | [
"MIT"
] | 308 | 2017-05-11T06:54:27.000Z | 2019-05-06T18:42:09.000Z | ### Project type
//{[{
This app is a blank project, for more information see [blank docs](https://github.com/microsoft/TemplateStudio/blob/main/docs/WinUI/projectTypes/blank.md).
//}]} | 46.25 | 155 | 0.735135 | eng_Latn | 0.625206 |
1df27faa812816411d5f42daadb0243458260dec | 764 | md | Markdown | README.md | pengtianabc/pcap_index | edfb15bb940e46816691a657756d38ec482fbac7 | [
"MIT"
] | null | null | null | README.md | pengtianabc/pcap_index | edfb15bb940e46816691a657756d38ec482fbac7 | [
"MIT"
] | null | null | null | README.md | pengtianabc/pcap_index | edfb15bb940e46816691a657756d38ec482fbac7 | [
"MIT"
] | null | null | null | # pcap_index
## TODO
### for search a flow
Add `reverse` bit index to normalize `src`/`dst` field for a flow, assume the variabe bitmap is `reverse`
a simple icmp flow with 2 pkt is:
(1) `sip=>dip`
(2) `sip<=sip`
save it to:
(1) `s=sip, d=dip, reverse=0`
(2) `s=sip, d=dip, reverse=0`
- if we search the first packet, expression is `s=sip, d=dip, reverse=0`, will find the fist packet
- if we search the whole flow, expresssion may be (`s=sip, d=sip, reverse=0`) || (`s=sip, d=sip, reverse=1`), equal as: `s=sip, d=dip`, so we can find the whole flow, instead use (`sip=sip, d=dip`) or (`s=dip, d=sip`) in common bpf expression
- `reverse` is very important, the program should keep it correct, otherwise, the search condition is inversed
| 31.833333 | 244 | 0.664921 | eng_Latn | 0.985506 |
1df35466023c197745934fa73b95e97064c33f4f | 2,721 | md | Markdown | README.md | hosamazzam/CustomViews | 378ae56e015b4ed8773d5f7ab7ece5a278907bba | [
"MIT"
] | 1 | 2019-05-15T18:24:40.000Z | 2019-05-15T18:24:40.000Z | README.md | hosamazzam/CustomViews | 378ae56e015b4ed8773d5f7ab7ece5a278907bba | [
"MIT"
] | null | null | null | README.md | hosamazzam/CustomViews | 378ae56e015b4ed8773d5f7ab7ece5a278907bba | [
"MIT"
] | 2 | 2019-06-18T14:01:52.000Z | 2019-06-19T14:10:09.000Z | # CustomViews
[![](https://jitpack.io/v/hosamazzam/CustomViews.svg)](https://jitpack.io/#hosamazzam/CustomViews)
[![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/hosamazzam/CustomViews/issues)
## Synopsis
CustomViews is library that support customize view like listView, gridView, imageView and textView
## Motivation
sometime you want to insert gridview or list view in scrollview but you can't do that is they not expanded as it has built in scrollview so i made this views tocatre non scroll view for this two views
For textview you can't add fontface for text in xml but in runtime but with this customfonttextview you can do that also if want to make your imageView rounded from corners, Sound awesome right!
## Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.
### Installing
To get a Git project into your build:
Step 1. Add the JitPack repository to your build file (Gradle)
Add it in your root build.gradle at the end of repositories:
allprojects {
repositories {
...
maven { url "https://jitpack.io" }
}
}
Step 2. Add the dependency in app/build.gradle
dependencies {
implementation 'com.github.hosamazzam:CustomViews:v1.0.5'
}
## Built With
* [Gradle](https://gradle.org/) - Build Tool
* [JitPack](https://jitpack.io/) - Publish your JVM and Android libraries
## Code Examples
[NonScrollGridView] tag in xml file
```
<com.hosamazzam.customviews.NonScrollGridView
android:id="@+id/grid_listview"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
```
[NonScrollListView] tag in xml file
```
<com.hosamazzam.customviews.NonScrollListView
android:id="@+id/custom_listview"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
```
[RoundCornerImageView] tag in xml file
```
<com.hosamazzam.customviews.RoundCornerImageView
android:id="@+id/custom_imageview"
android:layout_width="match_parent"
android:layout_height="match_parent"
app:cornerRadius="20.0"/>
```
[CustomFontTextView] tag in xml file
Notes : font file should put in src\main\assets/fonts/MyFont.ttf
```
<com.hosamazzam.customviews.CustomFontTextView
android:id="@+id/custom_textview"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:fontName="MyFont.ttf"/>
```
## License
This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details
| 30.573034 | 200 | 0.737596 | eng_Latn | 0.82054 |
1df6f99cd4f4414527a84b07765f8bb13c1f8b6d | 867 | md | Markdown | README.md | TianyiShi2001/lubridate-rs | 84b0b0ac956e1df75b9a4686c17efa61ff55db88 | [
"MIT"
] | null | null | null | README.md | TianyiShi2001/lubridate-rs | 84b0b0ac956e1df75b9a4686c17efa61ff55db88 | [
"MIT"
] | null | null | null | README.md | TianyiShi2001/lubridate-rs | 84b0b0ac956e1df75b9a4686c17efa61ff55db88 | [
"MIT"
] | null | null | null | # toki
Make working with dates in Rust just that little bit easier
## Why?
I was spoiled by the [**lubridate**](https://github.com/tidyverse/lubridate) package in the R world and found it can be pain to work with datetimes in Rust, especially when it comes to arithmatics and durations.
**toki** aims to makes datetime arithmatics simple and reliable. It does this by introducing three new time span classes borrowed from [**lubridate**](https://github.com/tidyverse/lubridate), which are, well, borrowed from [**joda**](http://joda.org).
- `durations`, which measure the exact amount of time between two points
- `periods`, which accurately track clock times despite leap years, leap seconds, and day light savings time
- `intervals`, a protean summary of the time information between two points
## About the name
**toki** is the word for "time" in Japanese. | 51 | 251 | 0.755479 | eng_Latn | 0.999328 |
1df98993da4974b5bdafbc58e623adcf5e7c6824 | 136 | md | Markdown | README.md | BukkitGerman/ruby_webserver | 2d8b5dd3fb4c99ac475617ef9f486b4e00a33196 | [
"MIT"
] | null | null | null | README.md | BukkitGerman/ruby_webserver | 2d8b5dd3fb4c99ac475617ef9f486b4e00a33196 | [
"MIT"
] | null | null | null | README.md | BukkitGerman/ruby_webserver | 2d8b5dd3fb4c99ac475617ef9f486b4e00a33196 | [
"MIT"
] | null | null | null | # ruby_webserver
A Webserver based on Ruby::Sinatra. That contains a function to generate a Dynamic Navigation bar for Webapplications.
| 45.333333 | 118 | 0.823529 | eng_Latn | 0.972458 |
1dfa0ea8d59e7f6d69267855716a9389acad96e8 | 272 | md | Markdown | single_use_scripts/README.md | timgianitsos/ancient_greek_genre_classification | 1729ef783d3feea30730a3223a4bc6b2a4488cb8 | [
"MIT"
] | null | null | null | single_use_scripts/README.md | timgianitsos/ancient_greek_genre_classification | 1729ef783d3feea30730a3223a4bc6b2a4488cb8 | [
"MIT"
] | null | null | null | single_use_scripts/README.md | timgianitsos/ancient_greek_genre_classification | 1729ef783d3feea30730a3223a4bc6b2a4488cb8 | [
"MIT"
] | 2 | 2020-01-22T19:43:08.000Z | 2020-01-22T19:44:57.000Z | Most of these scripts were made under the assumption that they would be run from the root directory of this project, therefore all the paths are relative to the root directory of this project.
This means to run them, move them to the root directory of the project first.
| 68 | 192 | 0.801471 | eng_Latn | 1.000006 |
1dfa26ca6e202bd421d1ecf4ccad614369b52913 | 22,227 | md | Markdown | articles/guidance/guidance-multitenant-identity-keyvault.md | OpenLocalizationTestOrg/azure-docs-pr15_fr-BE | 753623e5195c97bb016b3a1f579431af9672c200 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/guidance/guidance-multitenant-identity-keyvault.md | OpenLocalizationTestOrg/azure-docs-pr15_fr-BE | 753623e5195c97bb016b3a1f579431af9672c200 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/guidance/guidance-multitenant-identity-keyvault.md | OpenLocalizationTestOrg/azure-docs-pr15_fr-BE | 753623e5195c97bb016b3a1f579431af9672c200 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | <properties
pageTitle="À l’aide de la clé de stockage en chambre forte pour protéger la confidentialité de l’application | Microsoft Azure"
description="Comment une utilisation le coffre-fort de la clé de service pour stocker des secrets de l’application"
services=""
documentationCenter="na"
authors="MikeWasson"
manager="roshar"
editor=""
tags=""/>
<tags
ms.service="guidance"
ms.devlang="dotnet"
ms.topic="article"
ms.tgt_pltfrm="na"
ms.workload="na"
ms.date="02/16/2016"
ms.author="mwasson"/>
# <a name="using-azure-key-vault-to-protect-application-secrets"></a>À l’aide d’Azure, clé de chambre forte pour protéger la confidentialité de l’application
[AZURE.INCLUDE [pnp-header](../../includes/guidance-pnp-header-include.md)]
Cet article fait [partie d’une série]. Il existe également un [exemple d’application] complète qui accompagne cette série.
## <a name="overview"></a>Vue d’ensemble
Il est courant d’avoir des paramètres d’application sont sensibles et doivent être protégées, telles que :
- Chaînes de connexion de base de données
- Mots de passe
- Clés de chiffrement
Pour des raisons de sécurité, vous ne devez jamais stocker ces secrets dans le contrôle de code source. Il est trop facile de se perdre — même si votre référentiel de code source est privé. Et il n'est pas simplement le secret à partir de la public. Dans les projets plus importants, vous souhaiterez peut-être déterminer quels développeurs et opérateurs peuvent accéder les secrets de fabrication. (Pour les environnements de test ou de développement, les paramètres sont différents).
Une option la plus sûre est de stocker ces secrets dans [Azure clé coffre-fort][KeyVault]. Clé de stockage en chambre forte est un service hébergé sur le nuage pour la gestion des clés de chiffrement et d’autres données secrètes. Cet article explique comment utiliser le coffre-fort de la clé pour stocker des paramètres de configuration de l’application vous.
Dans les [Enquêtes de Tailspin] [ Surveys] application, les paramètres suivants sont secrètes :
- La chaîne de connexion de base de données.
- La chaîne de connexion Redis.
- Le secret de client de l’application web.
Pour stocker des secrets de configuration dans le coffre-fort de la clé, l’application d’enquêtes implémente un fournisseur de configuration personnalisé, qui raccorde le [système de configuration]ASP.NET Core 1.0[configuration]. Le fournisseur personnalisé lit les paramètres de configuration à partir de la clé de stockage en chambre forte au démarrage.
L’application d’enquêtes charge les paramètres de configuration à partir des emplacements suivants :
- Le fichier appsettings.json
- [Stockent des secrets de l’utilisateur] [ user-secrets] (environnement de développement pour les tests uniquement ;)
- L’environnement d’hébergement (paramètres de l’application dans les applications web Azure)
- Chambre forte de clé
Chacun de ces substitutions le, afin que les paramètres stockés dans la clé de stockage en chambre forte sont prioritaires.
> [AZURE.NOTE] Par défaut, le fournisseur de coffre-fort de la clé de configuration est désactivé. Il n’est pas nécessaire pour exécuter l’application localement. Vous devriez l’autoriser dans un déploiement de production.
> Le fournisseur de stockage en chambre forte de clé est actuellement pas pris en charge dans .NET, car elle nécessite la [Microsoft.Azure.KeyVault] [ Microsoft.Azure.KeyVault] package.
Au démarrage, l’application lit les paramètres de chaque fournisseur de configuration enregistrés et les utilise pour remplir un objet fortement typé d’options. (Pour plus d’informations, consultez [utilisation des Options et des objets de configuration][options].)
## <a name="implementation"></a>Mise en œuvre
La [KeyVaultConfigurationProvider] [ KeyVaultConfigurationProvider] classe est un fournisseur de configuration qui se branche dans le [système de configuration]ASP.NET Core 1.0[configuration].
Pour utiliser le `KeyVaultConfigurationProvider`, appelez le `AddKeyVaultSecrets` méthode d’extension dans la classe de démarrage :
```csharp
var builder = new ConfigurationBuilder()
.SetBasePath(appEnv.ApplicationBasePath)
.AddJsonFile("appsettings.json");
if (env.IsDevelopment())
{
builder.AddUserSecrets();
}
builder.AddEnvironmentVariables();
var config = builder.Build();
// Add key vault configuration:
builder.AddKeyVaultSecrets(config["AzureAd:ClientId"],
config["KeyVault:Name"],
config["AzureAd:Asymmetric:CertificateThumbprint"],
Convert.ToBoolean(config["AzureAd:Asymmetric:ValidationRequired"]),
loggerFactory);
```
Notez que `KeyVaultConfigurationProvider` requiert certains paramètres de configuration, qui doivent être stockés dans un des autres sources de configuration.
Lorsque l’application démarre, `KeyVaultConfigurationProvider` énumère tous les secrets dans la chambre forte de clé. Pour chaque code secret, il recherche une étiquette nommée « ConfigKey ». La valeur de la balise est le nom du paramètre de configuration.
> [AZURE.NOTE] [Balises] [ key-tags] sont stockées avec une clé des métadonnées facultatives. Les balises sont utilisés ici, car les noms de clés ne peut pas contenir les caractères deux-points ( :).
```csharp
var kvClient = new KeyVaultClient(GetTokenAsync);
var secretsResponseList = await kvClient.GetSecretsAsync(_vault, MaxSecrets, token);
foreach (var secretItem in secretsResponseList.Value)
{
//The actual config key is stored in a tag with the Key "ConfigKey"
// because ':' is not supported in a shared secret name by Key Vault.
if (secretItem.Tags != null && secretItem.Tags.ContainsKey(ConfigKey))
{
var secret = await kvClient.GetSecretAsync(secretItem.Id, token);
Data.Add(secret.Tags[ConfigKey], secret.Value);
}
}
```
> [AZURE.NOTE] Voir [KeyVaultConfigurationProvider.cs].
## <a name="setting-up-key-vault-in-the-surveys-app"></a>Configuration de coffre-fort de clé dans l’application d’enquêtes
Conditions préalables :
- Installer les [Applets de commande Gestionnaire de ressources Azure][azure-rm-cmdlets].
- Configurer l’application d’enquêtes, comme décrit dans [l’exécution de l’application des enquêtes][readme].
Étapes principales :
1. Configurer un utilisateur admin dans le locataire.
2. Permet de paramétrer un certificat client.
3. Créer un coffre-fort de clé.
4. Ajouter des paramètres de configuration à votre coffre-fort de clé.
5. Ne commentez pas le code qui permet la chambre forte de clé.
6. Mettre à jour les secrets d’utilisateur de l’application.
### <a name="set-up-an-admin-user"></a>Configurer un utilisateur admin
> [AZURE.NOTE] Pour créer un coffre-fort de clé, vous devez utiliser un compte qui peut gérer votre abonnement Azure. En outre, toutes les applications que vous autorisez à lire à partir de la chambre forte de clé doit être enregistré dans le même client que ce compte.
Dans cette étape, vous vous assurerez que vous pouvez créer un coffre-fort de clé lors d’une connexion en tant qu’utilisateur dans le client où l’application d’enquêtes est enregistrée.
Tout d’abord, modifiez le répertoire associé à votre abonnement Azure.
1. Ouvrez une session sur le [portail de gestion Azure][azure-management-portal]
2. Cliquez sur **paramètres**.
![Paramètres](media/guidance-multitenant-identity/settings.png)
3. Sélectionnez votre abonnement Azure.
4. Cliquez sur **Modifier le répertoire** au bas du portail.
![Paramètres](media/guidance-multitenant-identity/edit-directory.png)
5. Dans « Modifier le répertoire associé », sélectionnez le locataire AD Azure où l’application d’enquêtes est enregistrée,
![Paramètres](media/guidance-multitenant-identity/edit-directory2.png)
6. Cliquez sur la flèche et complétez la boîte de dialogue.
Créer un utilisateur admin cliente AD Azure où est enregistrée l’application d’enquêtes.
1. Connectez-vous au [portail de gestion Azure][azure-management-portal].
2. Sélectionnez le locataire AD Azure où votre candidature est enregistrée.
3. Cliquez sur **utilisateurs** > **Ajouter utilisateur**.
4. Dans la boîte de dialogue **Ajouter un utilisateur** , affectez l’utilisateur au rôle d’administrateur Global.
Ajoutez l’utilisateur admin en tant qu’un administrateur de collaboration pour votre abonnement Azure.
1. Connectez-vous au [portail de gestion Azure][azure-management-portal].
2. Cliquez sur **paramètres** et sélectionnez votre abonnement Azure.
3. Cliquez sur **administrateurs**
4. Cliquez sur **Ajouter** au bas du portail.
5. Entrez une adresse de l’utilisateur admin que vous avez créé précédemment.
6. Cochez la case de l’abonnement.
7. Cliquez sur le bouton coche pour compléter la boîte de dialogue.
![Ajouter un administrateur de coproduit](media/guidance-multitenant-identity/co-admin.png)
### <a name="set-up-a-client-certificate"></a>Configurer un certificat client
1. Exécutez le script de PowerShell [/Scripts/Setup-KeyVault.ps1] [ Setup-KeyVault] comme suit :
```
.\Setup-KeyVault.ps1 -Subject <<subject>>
```
Pour le `Subject` paramètre, entrez un nom quelconque, par exemple « surveysapp ». Le script génère un certificat auto-signé et le stocke dans le magasin de certificats « utilisateur actuel/personnel ».
2. La sortie du script est un fragment JSON. Ajouter au manifeste d’application de l’application web, comme suit :
1. Connectez-vous au [portail de gestion Azure] [ azure-management-portal] et naviguez jusqu'à votre répertoire AD Azure.
2. Cliquez sur **Applications**.
3. Sélectionnez l’application d’enquêtes.
4. Cliquez sur **Gérer le manifeste** et sélectionnez **Télécharger le manifeste**.
5. Dans un éditeur de texte, ouvrez le fichier manifeste de JSON. Coller la sortie à partir du script dans le `keyCredentials` propriété. Il doit ressembler à ce qui suit :
```
"keyCredentials": [
{
"type": "AsymmetricX509Cert",
"usage": "Verify",
"keyId": "29d4f7db-0539-455e-b708-....",
"customKeyIdentifier": "ZEPpP/+KJe2fVDBNaPNOTDoJMac=",
"value": "MIIDAjCCAeqgAwIBAgIQFxeRiU59eL.....
}
],
```
6. Enregistrez vos modifications dans le fichier JSON.
7. Revenez au portail. Cliquez sur **Gérer le manifeste** > **Manifeste de téléchargement** et de télécharger le fichier JSON.
3. Ajoutez le même fragment JSON au manifeste d’application du site web API (Surveys.WebAPI).
4. Exécutez la commande suivante pour obtenir l’empreinte numérique du certificat.
```
certutil -store -user my [subject]
```
où `[subject]` est la valeur que vous avez spécifié pour le sujet dans le script PowerShell. L’empreinte numérique est répertorié sous « Certificat hachage (SHA1) ». Supprimez les espaces entre les nombres hexadécimaux.
Vous utiliserez ultérieurement l’empreinte numérique.
### <a name="create-a-key-vault"></a>Créer un coffre-fort de clé
1. Exécutez le script de PowerShell [/Scripts/Setup-KeyVault.ps1] [ Setup-KeyVault] comme suit :
```
.\Setup-KeyVault.ps1 -KeyVaultName <<key vault name>> -ResourceGroupName <<resource group name>> -Location <<location>>
```
Lorsque vous y êtes invité pour les informations d’identification, connectez-vous en tant que l’utilisateur AD Azure que vous avez créé précédemment. Le script crée un nouveau groupe de ressources et un nouveau coffre-fort clé au sein de ce groupe de ressources.
Remarque : pour du paramètre - Location, vous pouvez utiliser la commande PowerShell suivante pour obtenir la liste des régions valides :
```
Get-AzureRmResourceProvider -ProviderNamespace "microsoft.keyvault" | Where-Object { $_.ResourceTypes.ResourceTypeName -eq "vaults" } | Select-Object -ExpandProperty Locations
```
2. Exécutez SetupKeyVault.ps à nouveau, avec les paramètres suivants :
```
.\Setup-KeyVault.ps1 -KeyVaultName <<key vault name>> -ApplicationIds @("<<web app client ID>>", "<<web API client ID>>")
```
où
- nom de clé coffre-fort = le nom que vous avez donné le coffre-fort de clés à l’étape précédente.
- Web application client ID = l’ID client pour l’application web enquêtes.
- Web api client ID = l’ID client pour l’application Surveys.WebAPI.
Exemple :
```
.\Setup-KeyVault.ps1 -KeyVaultName tailspinkv -ApplicationIds @("f84df9d1-91cc-4603-b662-302db51f1031", "8871a4c2-2a23-4650-8b46-0625ff3928a6")
```
> [AZURE.NOTE] Vous pouvez obtenir le client ID à partir du [portail de gestion Azure][azure-management-portal]. Sélectionnez le locataire AD Azure, sélectionnez l’application et cliquez sur **configurer**.
Ce script autorise le web app et les API web pour extraire les secrets de votre coffre-fort de clé. Reportez-vous à la section [mise en route de la chambre forte de clé Azure] [ authorize-app] pour plus d’informations.
### <a name="add-configuration-settings-to-your-key-vault"></a>Ajouter des paramètres de configuration pour votre clé coffre-fort
1. Exécutez SetupKeyVault.ps comme suit :
```
.\Setup-KeyVault.ps1 -KeyVaultName <<key vault name> -KeyName RedisCache -KeyValue "<<Redis DNS name>>.redis.cache.windows.net,password=<<Redis access key>>,ssl=true" -ConfigName "Redis:Configuration"
```
où
- nom de clé coffre-fort = le nom que vous avez donné le coffre-fort de clés à l’étape précédente.
- Nom DNS de redis = le nom DNS de votre instance de cache Redis.
- Touche d’accès rapide de redis = la touche d’accès rapide pour votre instance de cache Redis.
Cette commande ajoute un secret pour clé vault. Le secret est une paire nom/valeur et une balise :
- Le nom de clé n’est pas utilisé par l’application, mais il doit être unique dans le coffre-fort de la clé.
- La valeur est la valeur de l’option de configuration, dans ce cas, la chaîne de connexion Redis.
- la balise « ConfigKey » conserve le nom de la clé de configuration.
2. À ce stade, il est judicieux de vérifier si vous avez enregistré avec succès les secrets de la chambre forte de clé. Exécutez la commande PowerShell suivante :
```
Get-AzureKeyVaultSecret <<key vault name>> RedisCache | Select-Object *
```
La sortie doit indiquer la valeur secrète ainsi que des métadonnées :
![Sortie de PowerShell](media/guidance-multitenant-identity/get-secret.png)
3. Exécutez SetupKeyVault.ps pour ajouter la chaîne de connexion de base de données :
```
.\Setup-KeyVault.ps1 -KeyVaultName <<key vault name> -KeyName ConnectionString -KeyValue <<DB connection string>> -ConfigName "Data:SurveysConnectionString"
```
où `<<DB connection string>>` est la valeur de la chaîne de connexion de base de données.
Pour tester la base de données local, copiez la chaîne de connexion à partir du fichier Tailspin.Surveys.Web/appsettings.json. Si vous procédez ainsi, veillez à modifier la double barre oblique ('\\\\') dans une seule barre oblique inverse. La double barre oblique inverse est un caractère d’échappement dans le fichier JSON.
Exemple :
```
.\Setup-KeyVault.ps1 -KeyVaultName mykeyvault -KeyName ConnectionString -KeyValue "Server=(localdb)\MSSQLLocalDB;Database=Tailspin.SurveysDB;Trusted_Connection=True;MultipleActiveResultSets=true" -ConfigName "Data:SurveysConnectionString"
```
### <a name="uncomment-the-code-that-enables-key-vault"></a>Les commentaires du code qui permet le stockage en chambre forte de clé
1. Ouvrez la solution Tailspin.Surveys.
2. Dans [Tailspin.Surveys.Web/Startup.cs][web-startup], localisez le bloc de code suivant, et commentaire.
```csharp
//#if DNX451
// _configuration = builder.Build();
// builder.AddKeyVaultSecrets(_configuration["AzureAd:ClientId"],
// _configuration["KeyVault:Name"],
// _configuration["AzureAd:Asymmetric:CertificateThumbprint"],
// Convert.ToBoolean(_configuration["AzureAd:Asymmetric:ValidationRequired"]),
// loggerFactory);
//#endif
```
3. Dans [Tailspin.Surveys.WebAPI/Startup.cs][web-api-startup], localisez le bloc de code suivant, et commentaire.
```csharp
//#if DNX451
// var config = builder.Build();
// builder.AddKeyVaultSecrets(config["AzureAd:ClientId"],
// config["KeyVault:Name"],
// config["AzureAd:Asymmetric:CertificateThumbprint"],
// Convert.ToBoolean(config["AzureAd:Asymmetric:ValidationRequired"]),
// loggerFactory);
//#endif
```
4. Dans [Tailspin.Surveys.Web/Startup.cs][web-startup], recherchez le code qui enregistre la `ICredentialService`. Décommentez la ligne qui utilise `CertificateCredentialService`et commentez la ligne qui utilise `ClientCredentialService`:
```csharp
// Uncomment this:
services.AddSingleton<ICredentialService, CertificateCredentialService>();
// Comment out this:
//services.AddSingleton<ICredentialService, ClientCredentialService>();
```
Cette modification permet à l’application web utiliser [l’assertion du Client] [ client-assertion] pour obtenir les jetons d’accès OAuth. Avec l’assertion du client, vous n’avez pas besoin un secret de client OAuth. Ou bien, vous pouvez stocker le secret du client dans la chambre forte de clé. Toutefois, clé coffre-fort et l’assertion du client qu'utilisent tous deux un client du certificat, si vous activez la clé coffre-fort, il est recommandé d’activer l’assertion du client ainsi.
### <a name="update-the-user-secrets"></a>Mettre à jour les secrets de l’utilisateur
Dans l’Explorateur de solutions, cliquez droit sur le projet Tailspin.Surveys.Web et sélectionnez **Gérer les Secrets des utilisateurs**. Dans le fichier secrets.json, supprimer le JSON existant et collez le texte suivant :
```
{
"AzureAd": {
"ClientId": "[Surveys web app client ID]",
"PostLogoutRedirectUri": "https://localhost:44300/",
"WebApiResourceId": "[App ID URI of your Surveys.WebAPI application]",
"Asymmetric": {
"CertificateThumbprint": "[certificate thumbprint. Example: 105b2ff3bc842c53582661716db1b7cdc6b43ec9]",
"StoreName": "My",
"StoreLocation": "CurrentUser",
"ValidationRequired": "false"
}
},
"KeyVault": {
"Name": "[key vault name]"
}
}
```
Remplacer les entrées [crochets] avec les valeurs correctes.
- `AzureAd:ClientId`: L’identifiant client de l’application d’enquêtes.
- `AzureAd:WebApiResourceId`: L’URI d’ID App que vous avez spécifié lorsque vous avez créé l’application Surveys.WebAPI dans Azure AD.
- `Asymmetric:CertificateThumbprint`: L’empreinte numérique du certificat que vous avez précédemment, lorsque vous avez créé le certificat client.
- `KeyVault:Name`: Le nom de votre coffre-fort de clé.
> [AZURE.NOTE] `Asymmetric:ValidationRequired`est faux car le certificat que vous avez créés précédemment n’a pas été signé par une autorité de certification racine (CA). Dans production, utiliser un certificat qui est signé par une autorité de certification racine et la valeur `ValidationRequired` sur true.
Enregistrez le fichier secrets.json mis à jour.
Ensuite, dans l’Explorateur de solutions, cliquez droit sur le projet Tailspin.Surveys.WebApi et sélectionnez **Gérer les Secrets des utilisateurs**. Supprimer l’existant JSON et collez le texte suivant :
```
{
"AzureAd": {
"ClientId": "[Surveys.WebAPI client ID]",
"WebApiResourceId": "https://tailspin5.onmicrosoft.com/surveys.webapi",
"Asymmetric": {
"CertificateThumbprint": "[certificate thumbprint]",
"StoreName": "My",
"StoreLocation": "CurrentUser",
"ValidationRequired": "false"
}
},
"KeyVault": {
"Name": "[key vault name]"
}
}
```
Remplacer les entrées [crochets] et enregistrez le fichier secrets.json.
> [AZURE.NOTE] Pour l’API de web, assurez-vous d’utiliser l’ID client pour l’application de Surveys.WebAPI, pas l’application des enquêtes.
<!-- Links -->
[authorize-app]: ../key-vault/key-vault-get-started.md/#authorize
[azure-management-portal]: https://manage.windowsazure.com/
[azure-rm-cmdlets]: https://msdn.microsoft.com/library/mt125356.aspx
[client-assertion]: guidance-multitenant-identity-client-assertion.md
[configuration]: https://docs.asp.net/en/latest/fundamentals/configuration.html
[KeyVault]: https://azure.microsoft.com/services/key-vault/
[KeyVaultConfigurationProvider]: https://github.com/Azure-Samples/guidance-identity-management-for-multitenant-apps/blob/master/src/Tailspin.Surveys.Configuration.KeyVault/KeyVaultConfigurationProvider.cs
[key-tags]: https://msdn.microsoft.com/library/azure/dn903623.aspx#BKMK_Keytags
[Microsoft.Azure.KeyVault]: https://www.nuget.org/packages/Microsoft.Azure.KeyVault/
[options]: https://docs.asp.net/en/latest/fundamentals/configuration.html#using-options-and-configuration-objects
[readme]: https://github.com/Azure-Samples/guidance-identity-management-for-multitenant-apps/blob/master/docs/running-the-app.md
[Setup-KeyVault]: https://github.com/Azure-Samples/guidance-identity-management-for-multitenant-apps/blob/master/scripts/Setup-KeyVault.ps1
[Surveys]: guidance-multitenant-identity-tailspin.md
[user-secrets]: http://go.microsoft.com/fwlink/?LinkID=532709
[web-startup]: https://github.com/Azure-Samples/guidance-identity-management-for-multitenant-apps/blob/master/src/Tailspin.Surveys.Web/Startup.cs
[web-api-startup]: https://github.com/Azure-Samples/guidance-identity-management-for-multitenant-apps/blob/master/src/Tailspin.Surveys.WebAPI/Startup.cs
[partie d’une série]: guidance-multitenant-identity.md
[KeyVaultConfigurationProvider.cs]: https://github.com/Azure-Samples/guidance-identity-management-for-multitenant-apps/blob/master/src/Tailspin.Surveys.Configuration.KeyVault/KeyVaultConfigurationProvider.cs
[exemple d’application]: https://github.com/Azure-Samples/guidance-identity-management-for-multitenant-apps
| 52.176056 | 491 | 0.742475 | fra_Latn | 0.915338 |
1dfa53acc794cbcd4f6314f9504b3696f23b9d2c | 914 | md | Markdown | container_files/public_html/doc/builtin/functions/NearestNeighborsFunction.md | kstepanmpmg/mldb | f78791cd34d01796705c0f173a14359ec1b2e021 | [
"Apache-2.0"
] | 665 | 2015-12-09T17:00:14.000Z | 2022-03-25T07:46:46.000Z | container_files/public_html/doc/builtin/functions/NearestNeighborsFunction.md | tomzhang/mldb | a09cf2d9ca454d1966b9e49ae69f2fe6bf571494 | [
"Apache-2.0"
] | 797 | 2015-12-09T19:48:19.000Z | 2022-03-07T02:19:47.000Z | container_files/public_html/doc/builtin/functions/NearestNeighborsFunction.md | matebestek/mldb | f78791cd34d01796705c0f173a14359ec1b2e021 | [
"Apache-2.0"
] | 103 | 2015-12-25T04:39:29.000Z | 2022-02-03T02:55:22.000Z | # Nearest Neighbors Function
The `embedding.neighbors` function type returns information about the nearest
neighbor rows in an existing ![](%%doclink embedding dataset) to an arbitrary point.
## Configuration
![](%%config function embedding.neighbors)
## Input and Output Values
Functions of this type have the following input values:
* `coords`: name of row for which to find neighbors, or embedding representing the point in space for which to find neighbors
* `num_neighbours`: optional integer overriding the function's default value if specified
* `max_distance`: optional double overriding the function's default value if specified
Functions of this type have the following output values:
* `neighbors`: an embedding of the rowPaths of the nearest neighbors in order of proximity
* `distances`: a row of rowName to distance for the nearest neighbors
## See also
* ![](%%doclink embedding dataset)
| 35.153846 | 125 | 0.780088 | eng_Latn | 0.999295 |
1dfb51b915233e1092fd6c7014095164132c4d3e | 631 | md | Markdown | docs/services/index.md | kkoppenhaver/docs | 566b013fbe4a64c1791f4a54a0786426f0ed7f6a | [
"MIT"
] | null | null | null | docs/services/index.md | kkoppenhaver/docs | 566b013fbe4a64c1791f4a54a0786426f0ed7f6a | [
"MIT"
] | null | null | null | docs/services/index.md | kkoppenhaver/docs | 566b013fbe4a64c1791f4a54a0786426f0ed7f6a | [
"MIT"
] | null | null | null | # Services
Tide consists of the following services:
* The [API](api.md) implements MySQL, PHP-FPM, and an Nginx web server with WordPress installed
serving a theme and a REST API.
* The [Sync Server](sync.md) polls the WordPress.org API’s for themes and plugins to process and writes them to a queue.
* The [PHPCS Server](phpcs.md) reads messages from a queue and runs reports against both plugins and themes, then sends the results back to the Tide API.
* The [Lighthouse Server](lighthouse.md) reads messages from a queue and runs Google Lighthouse reports against the themes only, then sends the results back to the Tide API. | 70.111111 | 173 | 0.776545 | eng_Latn | 0.993175 |
1dfbdec316fc966fae2ff5e0291597b23b62ff6b | 81 | md | Markdown | README.md | acceleratehk/git-deployment-test | 5c66ccaf1c4b81513e976cf22679465401ee35f5 | [
"Apache-2.0"
] | null | null | null | README.md | acceleratehk/git-deployment-test | 5c66ccaf1c4b81513e976cf22679465401ee35f5 | [
"Apache-2.0"
] | null | null | null | README.md | acceleratehk/git-deployment-test | 5c66ccaf1c4b81513e976cf22679465401ee35f5 | [
"Apache-2.0"
] | null | null | null | # git-deployment-test
Git Deployment Test is a test for testing git deployment.
| 20.25 | 57 | 0.790123 | eng_Latn | 0.943907 |
1dfc153019902cff0cf9b19a6e9d4459db6fa6be | 2,066 | md | Markdown | _pages/about.md | SeKwonLee/sekwonlee.github.io | 92adf22062e53090eee676378c027a60b9c1d99b | [
"MIT"
] | null | null | null | _pages/about.md | SeKwonLee/sekwonlee.github.io | 92adf22062e53090eee676378c027a60b9c1d99b | [
"MIT"
] | null | null | null | _pages/about.md | SeKwonLee/sekwonlee.github.io | 92adf22062e53090eee676378c027a60b9c1d99b | [
"MIT"
] | 1 | 2021-07-14T04:39:43.000Z | 2021-07-14T04:39:43.000Z | ---
permalink: /
title: "About me"
excerpt: "About me"
author_profile: true
redirect_from:
- /about/
- /about.html
---
### Life inspired by love and guided by knowledge - Bertrand Russell
I am a Ph.D. student in the Department of Computer Science at The University of Texas at Austin,
working with Prof. [Vijay Chidambaram](http://www.cs.utexas.edu/~vijay/) in the
[UT Systems and Storage Lab](http://utsaslab.cs.utexas.edu/) and the [LASR](https://www.cs.utexas.edu/lasr/)
research group. I am also closely working with [Kimberly Keeton](https://scholar.google.co.kr/citations?user=wR_tv-kAAAAJ&hl=en&oi=ao), [Sharad Singhal](https://scholar.google.co.kr/citations?user=_CKGpJ0AAAAJ&hl=en&oi=sra),
and [Marcos K. Aguilera](http://mkaguilera.kawazoe.org/).
I received my masters in Computer Science from [UNIST](https://www.unist.ac.kr/)
(Ulsan National Institute of Science and Technology), where I worked with Prof. [Sam H. Noh](http://next.unist.ac.kr/professor).
I am broadly interested in improving the performance and reliability of storage/file systems, operating systems,
distributed systems, and database systems. Especially, my recent studies focus on designing an index structure
and building a system based on Persistent Memory (PM). Currently, I am studying the design issues of persistent
datastore running on far memory architectures.
# News
* I did my internship at MSR Redmond this summer working with [Jonathan Goldstein](https://www.microsoft.com/en-us/research/people/jongold/).
* I was selected as a [2021 Microsoft Research PhD Fellow](https://www.microsoft.com/en-us/research/academic-program/phd-fellowship/#!fellows).
* [RECIPE](https://sekwonlee.github.io/publications/nvmw20_recipe) and [SplitFS](https://sekwonlee.github.io/publications/nvmw20_splitfs) will be presented at the 11th Annual Non-Volatile Memories Workshop (<b>NVMW 2020</b>).
* Two papers ([RECIPE](https://sekwonlee.github.io/publications/sosp19_recipe), [SplitFS](https://sekwonlee.github.io/publications/sosp19_splitfs)) were accepted to <b>SOSP 2019</b>.
| 66.645161 | 225 | 0.767183 | eng_Latn | 0.842724 |
1dfcdc9f6cc5f97a0df1c650ecfe65ed687a6411 | 267 | md | Markdown | _posts/articles/2012-06-03-.md | NicHub/hypnodingues.org | 2f526df959a67201b4347167fa86595803aa55df | [
"MIT"
] | null | null | null | _posts/articles/2012-06-03-.md | NicHub/hypnodingues.org | 2f526df959a67201b4347167fa86595803aa55df | [
"MIT"
] | null | null | null | _posts/articles/2012-06-03-.md | NicHub/hypnodingues.org | 2f526df959a67201b4347167fa86595803aa55df | [
"MIT"
] | 1 | 2016-02-23T19:15:11.000Z | 2016-02-23T19:15:11.000Z | ---
layout: post
title: ''
date: 2012-06-03 15:41:23.000000000 +02:00
categories:
- Divers
tags: []
status: draft
type: post
published: false
meta:
_edit_last: '2'
author:
first_name: Nico
excerpt:
---
<p>tomize. See <a href="http://ed.ted.com/">TED-Ed »</a></p>
| 14.833333 | 60 | 0.655431 | eng_Latn | 0.250575 |
38004a62479272b605be562cbb8d77595304c34e | 9,603 | md | Markdown | _posts/spring/2019-10-23-Spring Transaction.md | HUSTHuangKai/HUSTHuangKai.github.io | c03922aebbcbd6695fb2ba554d66a2196679325d | [
"MIT"
] | null | null | null | _posts/spring/2019-10-23-Spring Transaction.md | HUSTHuangKai/HUSTHuangKai.github.io | c03922aebbcbd6695fb2ba554d66a2196679325d | [
"MIT"
] | null | null | null | _posts/spring/2019-10-23-Spring Transaction.md | HUSTHuangKai/HUSTHuangKai.github.io | c03922aebbcbd6695fb2ba554d66a2196679325d | [
"MIT"
] | null | null | null | ---
title: Spring 中的事务管理
layout: post
categories: Java
tags: spring
---
* content
{:toc}
# Spring 中的事务管理
## 1 事务简介
事务管理是企业级应用程序开发中必不可少的技术, 用来确保数据的完整性和一致性.
事务就是一系列的动作, 它们被当做一个单独的工作单元. 这些动作要么全部完成, 要么全部不起作用
事务的四个关键属性(ACID)
- 原子性(atomicity): 事务是一个原子操作, 由一系列动作组成. 事务的原子性确保动作要么全部完成要么完全不起作用。
- 一致性(consistency): 一旦所有事务动作完成, 事务就被提交. 数据和资源就处于一种满足业务规则的一致性状态中
- 隔离性(isolation): 可能有许多事务会同时处理相同的数据, 因此每个事物都应该与其他事务隔离开来, 防止数据损坏
- 持久性(durability): 一旦事务完成, 无论发生什么系统错误, 它的结果都不应该受到影响. 通常情况下, 事务的结果被写到持久化存储器中.
## 2 事务管理的问题
- 必须为不同的方法重写类似的样板代码
- 这段代码是特定于 JDBC 的, 一旦选择类其它数据库存取技术, 代码需要作出相应的修改
![](Spring%20%E4%B8%AD%E7%9A%84%E4%BA%8B%E5%8A%A1%E7%AE%A1%E7%90%86.assets/1571735884261-1571811056946.png)
## 3 Spring中的事务管理
- 作为企业级应用程序框架, Spring 在不同的事务管理 API 之上定义了一个抽象层. 而应用程序开发人员不必了解底层的事务管理 API, 就可以使用 Spring 的事务管理机制。
- Spring 既支持编程式事务管理, 也支持声明式的事务管理。
- 编程式事务管理: 将事务管理代码嵌入到业务方法中来控制事务的提交和回滚. 在编程式管理事务时, 必须在每个事务操作中包含额外的事务管理代码。
- **声明式事务管理**: 大多数情况下比编程式事务管理更好用. 它将事务管理代码从业务方法中分离出来, 以声明的方式来实现事务管理. 事务管理作为一种横切关注点, 可以通过 AOP 方法模块化. Spring 通过 Spring AOP 框架支持声明式事务管理.
## 4 举例
### 4.1 数据库表
书
| isbn | book_name | price |
| ---- | --------- | ----- |
| 1 | Java | 50 |
| 2 | C++ | 55 |
库存
| isbn | stock |
| ---- | ----- |
| 1 | 10 |
| 2 | 10 |
用户账户
| id | user_name | balance |
| ---- | --------- | ------- |
| 1 | hk | 200 |
| 2 | ff | 300 |
配置文件
```xml
<!-- 配置事务管理器 -->
<bean id="dataSourceTransactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource"></property>
</bean>
<!-- 启用事务注解 -->
<tx:annotation-driven transaction-manager="dataSourceTransactionManager"/>
```
服务
```java
public class BookStoreServiceImpl implements BookShopService {
@Autowired
BookShopDao bookShopDao;
@Transactional
@Override
// 用户买书事务,如果事务途中出现异常,则会回滚
public void purchase(int userid, String isbn) {
// TODO Auto-generated method stub
// 更新库存
bookShopDao.updateBookStock(isbn);
// 查询价格
double price = bookShopDao.findBookPriceByIsbn(isbn);
// 更新余额
bookShopDao.updateUserBalance(userid, price);
}
}
```
```java
@Repository
public class BookShopDaoImpl implements BookShopDao {
@Autowired
private JdbcTemplate jdbcTemplate;
// 查价格
@Override
public double findBookPriceByIsbn(String isbn) {
// TODO Auto-generated method stub
String sql = "SELECT price FROM book WHERE isbn = ?";
return jdbcTemplate.queryForObject(sql, Double.class, isbn);
}
// 查余额
@Override
public double findBalanceById(int id) {
// TODO Auto-generated method stub
String sql = "SELECT balance FROM accounts WHERE id = ?";
return jdbcTemplate.queryForObject(sql, Double.class, id);
}
// 查库存
@Override
public int findStockByIsbn(String isbn) {
// TODO Auto-generated method stub
String sql = "SELECT stock FROM book_stock WHERE isbn = ?";
return jdbcTemplate.queryForObject(sql, Integer.class, isbn);
}
@Override
public void updateBookStock(String isbn) {
// TODO Auto-generated method stub
int stock = findStockByIsbn(isbn);
// 如果库存不足则抛出异常
if (stock == 0) {
throw (new BookStockException("库存不足!"));
}
String sql = "UPDATE book_stock SET stock = stock - 1 WHERE isbn = ?";
jdbcTemplate.update(sql, isbn);
}
@Override
public void updateUserBalance(int id, double price) {
// TODO Auto-generated method stub
// 如果余额不足则抛出异常
double balance = findBalanceById(id);
if (balance < price) {
throw new UserBalanceException("余额不足!");
}
String sql = "UPDATE accounts SET balance = balance - ? WHERE id = ?";
jdbcTemplate.update(sql, price, id);
}
}
```
使用
```java
@Test
public void testBookShopService() {
bookShopService.purchase(1, "1");
}
```
## 5 事务的传播行为
- 当事务方法被另一个事务方法调用时, 必须指定事务应该如何传播. 例如: 方法可能继续在现有事务中运行, 也可能开启一个新事务, 并在自己的事务中运行.
- 事务的传播行为可以由传播属性指定. Spring 定义了 7 种类传播行为.
1. REQUIRED:如果有事务在运行,当前的方法就在这个事务内运行,否则,就启动一个新的事务,并在自己的事务内运行。
2. REQUIRED_NEW:当前的方法必须启动新的事务,并在它自己的事务内运行,如果有事务正在运行,应该将它挂起。
### 5.1 举例说明
仍以上面的书店例子为例:我们有一个买书的perchase(),该服务是一个单本购买服务。现在有一个新的需求,一个个顾客一次需要买多本书,也就是批量购买服务。批量购买服务会多次调用单本购买服务,那么问题来了,如果在一次批量购买中,已经完成了若干次单本购买,但是下一次单本购买出现了异常,那么应该回滚所有的购买事务,还是仅回滚当前购买失败的事务呢?换句话说,是一本都不买呢,还是能买多少买多少呢?这就是REQUIRED和REQUIRED_NEW的区别。默认为REQUIRED,表示如果当前方法整个是一个事务,如果回滚整个都回滚,也就是如果失败了则一本都不买。REQUIRED_NEW表示每个方法中的事务都启动一个新事务,失败时只回滚失败的那个事务,也就是能买多少,买多少。
1. 示例
```java
/*
* 结账服务
*/
@Service("cashier")
public class CashierImpl implements Cashier {
@Autowired
private BookShopService bookShopService;
/*
* 批量购买
*/
@Transactional
@Override
public void checkout(int userId, List<String> isbnList) {
// TODO Auto-generated method stub
for(String isbn : isbnList) {
bookShopService.purchase(userId, isbn);
}
}
}
```
当前账户余额和库存
![](Spring%20%E4%B8%AD%E7%9A%84%E4%BA%8B%E5%8A%A1%E7%AE%A1%E7%90%86.assets/1571810026459-1571811165897.png)
![](Spring%20%E4%B8%AD%E7%9A%84%E4%BA%8B%E5%8A%A1%E7%AE%A1%E7%90%86.assets/1571810026459-1571811181157.png)
2. 使用required
```java
@Service("bookShopService")
public class BookShopServiceImpl implements BookShopService {
@Autowired
BookShopDao bookShopDao;
@Transactional // 默认使用REQUIRED,不启用新事务
@Override
public void purchase(int userid, String isbn) {
// TODO Auto-generated method stub
bookShopDao.updateBookStock(isbn);
double price = bookShopDao.findBookPriceByIsbn(isbn);
bookShopDao.updateUserBalance(userid, price);
}
}
```
使用批量购买
```java
// 测试结账服务
@Test
public void testCashierService() {
List<String> isbnList = new LinkedList<String>();
isbnList.add("2");
isbnList.add("1");
// id为1的用户,购买isbn为1、2的书各一本
cashier.checkout(1, isbnList);
}
```
结果:
![](Spring%20%E4%B8%AD%E7%9A%84%E4%BA%8B%E5%8A%A1%E7%AE%A1%E7%90%86.assets/1571810187147-1571811243872.png)
![](Spring%20%E4%B8%AD%E7%9A%84%E4%BA%8B%E5%8A%A1%E7%AE%A1%E7%90%86.assets/1571810207238-1571811254506.png)
表示购买成功。
再次运行测试程序,id为1的用户,购买isbn为1、2的书各一本。
结果:
> com.husthuangkai.spring.tx.UserBalanceException: 余额不足!
> ....
出现此异常,查看余额和库存:
![1571811398122](Spring%20%E4%B8%AD%E7%9A%84%E4%BA%8B%E5%8A%A1%E7%AE%A1%E7%90%86.assets/1571811398122.png)
![1571811404881](Spring%20%E4%B8%AD%E7%9A%84%E4%BA%8B%E5%8A%A1%E7%AE%A1%E7%90%86.assets/1571811404881.png)
可见,默认情况下,使用REQUIRED参数,checkout()是一整个事务,purchase()不启用新事务,当发生异常时,checkout()回滚。
3. 使用REQUIRED_NEW
```java
@Service("bookShopService")
public class BookShopServiceImpl implements BookShopService {
@Autowired
BookShopDao bookShopDao;
/*
*使用 propagation指明事务的传播属性,默认为REQUIRED
*REQUIRED:使用调用方法的事务,当异常发生时,回滚调用方法
*REQUIRES_NEW:本方法要求启用新事务,当本方法发生异常时,仅回滚本事务
*/
// @Transactional // 默认使用REQUIRED,不启用新事务
@Transactional(propagation = Propagation.REQUIRES_NEW) // 指明使用REQUIRES_NEW,需要启用新事务
@Override
public void purchase(int userid, String isbn) {
// TODO Auto-generated method stub
bookShopDao.updateBookStock(isbn);
double price = bookShopDao.findBookPriceByIsbn(isbn);
bookShopDao.updateUserBalance(userid, price);
}
}
```
运行测试程序:
```java
// 测试结账服务
@Test
public void testCashierService() {
List<String> isbnList = new LinkedList<String>();
isbnList.add("2");
isbnList.add("1");
// id为1的用户,购买isbn为1、2的书各一本
cashier.checkout(1, isbnList);
}
```
结果:
> com.husthuangkai.spring.tx.UserBalanceException: 余额不足!
> ...
>
出现余额不足异常,查看余额和库存:
![1571813790298](Spring%20%E4%B8%AD%E7%9A%84%E4%BA%8B%E5%8A%A1%E7%AE%A1%E7%90%86.assets/1571813790298.png)
![1571813797605](Spring%20%E4%B8%AD%E7%9A%84%E4%BA%8B%E5%8A%A1%E7%AE%A1%E7%90%86.assets/1571813797605.png)
结果是购买了一本书,表明purchase()启动了新的事务,发生异常时,perchase()回滚,checkout()不回滚。
## 6 事务的其他属性
### 6.1 事务的隔离级别
使用isolation指定事务的隔离级别,最常用的是READ_COMMITED。
READ_COMMITED:读已提交
### 6.2 事务的回滚属性
默认情况下,Spring的声明式事务对所有的运行时异常进行回滚,也可以通过对应的属性进行设置,通常情况下使用默认值即可。
### 6.3 事务的只读属性
使用readOnly指定事务是否只读,表示这个事务只读取数据但不更新数据,这样可以帮助数据库引擎优化事务。
### 6.4 事务的超时属性
使用timeout指定事务的时间限制,如果超时则回滚。
## 7 使用配置文件配置事务管理器
```xml
<!-- 配置 bean -->
<bean id="bookShopDao" class="com.atguigu.spring.tx.xml.BookShopDaoImpl">
<property name="jdbcTemplate" ref="jdbcTemplate"></property>
</bean>
<bean id="bookShopService" class="com.atguigu.spring.tx.xml.service.impl.BookShopServiceImpl">
<property name="bookShopDao" ref="bookShopDao"></property>
</bean>
<bean id="cashier" class="com.atguigu.spring.tx.xml.service.impl.CashierImpl">
<property name="bookShopService" ref="bookShopService"></property>
</bean>
<!-- 1. 配置事务管理器 -->
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource"></property>
</bean>
<!-- 2. 配置事务属性 -->
<tx:advice id="txAdvice" transaction-manager="transactionManager">
<tx:attributes>
<!-- 根据方法名指定事务的属性 -->
<tx:method name="purchase" propagation="REQUIRES_NEW"/>
<tx:method name="get*" read-only="true"/>
<tx:method name="find*" read-only="true"/>
<tx:method name="*"/>
</tx:attributes>
</tx:advice>
<!-- 3. 配置事务切入点, 以及把事务切入点和事务属性关联起来 -->
<aop:config>
<aop:pointcut expression="execution(* com.atguigu.spring.tx.xml.service.*.*(..))"
id="txPointCut"/>
<aop:advisor advice-ref="txAdvice" pointcut-ref="txPointCut"/>
</aop:config>
```
| 21.924658 | 334 | 0.683849 | yue_Hant | 0.86267 |
3801526614945fc8f024fce9347e9f4c7972a6bf | 17,696 | md | Markdown | README.md | noumannahmad/deform | 4212cac5682da27d63eb7583bf9a517664313eb4 | [
"MIT"
] | 15 | 2019-06-15T10:19:24.000Z | 2022-02-18T05:35:58.000Z | README.md | noumannahmad/deform | 4212cac5682da27d63eb7583bf9a517664313eb4 | [
"MIT"
] | 36 | 2019-06-24T14:32:04.000Z | 2022-01-16T21:16:09.000Z | README.md | simeks/deform | 3d6ab6f3c398ef18343b77c84874cec64b918e57 | [
"MIT"
] | 4 | 2019-06-25T20:01:28.000Z | 2021-08-29T17:33:47.000Z | # deform
deform is an implementation of an efficient graph-cut based method for dense deformable image registration. The original and the GPU accelerated method were introduced in the following papers:
* S. Ekström, F. Malmberg, H. Ahlström, J. Kullberg, and R. Strand, “Fast graph-cut based optimization for practical dense deformable registration of volume images,” Computerized Medical Imaging and Graphics, vol. 84, p. 101745, Sep. 2020, doi: [10.1016/j.compmedimag.2020.101745](https://doi.org/10.1016/j.compmedimag.2020.101745).
* S. Ekström, M. Pilia, J. Kullberg, H. Ahlström, R. Strand, and F. Malmberg, “Faster dense deformable image registration by utilizing both CPU and GPU,” J. Med. Imag., vol. 8, no. 01, Feb. 2021, doi: [10.1117/1.JMI.8.1.014002](https://doi.org/10.1117/1.JMI.8.1.014002).
If you make use of this software it would be appreciated if you cite these papers.
The method can be used either as a module through Python (recommended) or a standalone executable. Currently no pre-built binaries for the standalone executable are provided, but the Python module (excluding GPU support) can be installed through pip.
## Install
To download and install the pre-compiled Python module from pip:
```
pip install pydeform
```
Note: to enable GPU-supported registration you're required to compile the software yourself. See the section below.
## Building
### Prerequisites
* CMake : https://cmake.org/
Optional
* ISPC : https://ispc.github.io/
### Download
Retrieve the repository and associated dependencies by running
```
$ git clone https://github.com/simeks/deform.git
$ cd deform
$ git submodule update --init --recursive
```
### Python
```
# python setup.py install
```
Flags accepted by `setup.py`:
* `--use-cuda`: build with CUDA support
* `--use-ispc`: build with ISPC support
* `--use-itk`: build with ITK support
* `--debug`: build with debug symbols
Additional flags starting with `-D` are also recognised and forwarded to CMake. See [C++ section](#build_options) for available build options.
### C++
Use CMake (>=3.8) to generate build options of your own choosing.
If CMake cannot find the ISPC executable on your installation, it is possible
to hint the installation directory with `-DISPC_DIR_HINTS`, or to specify the
full path to the executable with `-DISPC_EXECUTABLE`.
#### <a name="build_options"></a>Build options
The build can be configured with the following CMake boolean options:
+ `DF_BUILD_TESTS`: Build unit tests (default: `OFF`)
+ `DF_BUILD_DOCS`: Build Sphinx docs (default: `OFF`)
+ `DF_BUILD_EXECUTABLE`: Build registration executable (default: `ON`)
+ `DF_BUILD_PYTHON_WRAPPER`: Build Python wrapper (default: `OFF`)
+ `DF_USE_CUDA`: Enable CUDA support (default: `OFF`)
+ `DF_USE_ISPC`: Enable ISPC support (default: `OFF`)
+ `DF_WARNINGS_ARE_ERRORS`: Warnings are treated as errors (default: `OFF`)
+ `DF_BUILD_WITH_DEBUG_INFO`: Include debug info in release builds (default: `OFF`)
+ `DF_ENABLE_FAST_MATH`: Enable fast math (default: `OFF`)
+ `DF_ITK_BRIDGE`: Add support to interoperate with ITK (default: `OFF`)
+ `DF_STACK_TRACE`: Print a stack trace on errors (default: `OFF`)
+ `DF_ENABLE_MICROPROFILE`: Enable `microprofile` profiler (default: `OFF`)
+ `DF_ENABLE_NVTOOLSEXT`: Enable `nvtoolsext` profiler (default: `OFF`)
# Run
## Examples
Examples on how to run the registration can be found in the [examples](https://github.com/simeks/deform/tree/development/examples) subfolder.
## Python
Everything needed for a simple registration setup is located in the `pydeform` package. The package provides two APIs; first uses `pydeform.Volume` for handling images and the second uses `SimpleITK`.
### pydeform API
This API uses `pydeform.Volume` which is a direct wrapper around the internal `Volume` class used within deform.
```python
import pydeform
fixed = pydeform.read_volume('fixed_file.nrrd')
moving = pydeform.read_volume('moving_file.nrrd')
affine_transform = pydeform.read_affine_transform('affine.txt')
settings = {
'pyramid_levels': 4
}
df = pydeform.register(
fixed,
moving,
settings=settings,
affine_transform=affine_transform
)
pydeform.write_volume('result.nrrd', df)
```
### SimpleITK API
[SimpleITK](http://www.simpleitk.org/) is a simplified layer built on top of [ITK](https://itk.org/) that provides a wide array of different filters and supports a larger variety of image formats compared to the `pydeform` API.
The API itself is similar to the `pydeform` API with the exception that it takes `SimpleITK.Image` as input for images and `SimpleITK.AffineTransform` as input for affine transforms. To use this API simply use `import pydeform.sitk_api as pydeform`.
```python
import SimpleITK as sitk
import pydeform.sitk_api as pydeform
fixed = sitk.ReadImage('fixed_file.nrrd')
moving = sitk.ReadImage('moving_file.nrrd')
affine_transform = sitk.ReadTransform('affine.txt')
settings = {
'pyramid_levels': 4
}
df = pydeform.register(
fixed,
moving,
settings=settings,
affine_transform=sitk.AffineTransform(affine_transform)
)
sitk.WriteImage(df, 'result.nrrd')
```
### Settings
The Python API provides the same [parameters](#registration_settings) as the command-line interface. However, rather the specifying the parameters in a YAML-document, the parameters are set by passing a `dict` object to the registration.
```python
settings = {
'pyramid_levels': 4,
'levels': {
'0': {'max_iteration_count': 20}
}
'image_slots': [
{'cost_function': 'ncc'}
]
}
pydeform.register(fixed, moving, settings=settings)
```
## Command-line
To perform a registration using the standalone executable
`deform registration -p <param file> -f0 <fixed_0> ... -f<i> <fixed_i> -m0 <moving_0> ... -m<i> <moving_i>`
| Argument | |
| --------------------------- | ------------------------------------------- |
| `-f<i> <file>` | Filename of the i:th fixed image.† |
| `-m<i> <file>` | Filename of the i:th moving image.† |
| `-fm <file>` | Filename of the fixed mask.‡ |
| `-mm <file>` | Filename of the moving mask.‡ |
| `-fp <file>` | Filename for the fixed landmarks. |
| `-mp <file>` | Filename for the moving landmarks. |
| `-d0 <file>` | Filename for initial deformation field. |
| `-a <file>` | Filename for initial affine transformation |
| `-constraint_mask <file>` | Filename for constraint mask. |
| `-constraint_values <file>` | Filename for constraint values. |
| `-rm <file>` | Filename for regularization weight map |
| `-p <file>` | Filename of the parameter file. |
| `-o <file>` | Filename of the resulting deformation field |
| `-j <file>` | Filename of the resulting jacobian map |
| `-t <file>` | Filename of the transformed moving volume |
| `--gpu` | Enables GPU assisted registration. |
† Requires a matching number of fixed and moving images.
‡ Fuzzy masks in floating point format, whose values denote the confidence on
the image intensity at each point.
### <a name="registration_settings"></a>Parameter file example
```yaml
pyramid_levels: 6
pyramid_stop_level: 0
solver: gco
update_rule: additive
constraints_weight: 1000.0
landmarks_weight: 1.0
landmarks_decay: 2.0
landmarks_stop_level: 0
block_size: [16, 16, 16]
block_energy_epsilon: 1e-7
max_iteration_count: -1
step_size: 0.5
regularization_weight: 0.1
regularization_scale: 1.0
regularization_exponent: 2.0
levels:
0:
regularization_weight: 0.1
1:
regularization_weight: 0.2
step_size: 0.01
image_slots:
# water
- resampler: gaussian
normalize: true
cost_function:
- function: ssd
weight: 0.3
- function: ncc
weight: 0.4
radius: 2
window: cube
- function: mi
weight: 0.6
sigma: 4.5
bins: 256
update_interval: 1
interpolator: nearest
- function: gradient_ssd
weight: 0.7
sigma: 1.0
# sfcm
- resampler: gaussian
normalize: true
cost_function: ssd
```
First two parameters, `pyramid_levels` and `pyramid_stop_level`, defines the
size of the pyramid and at which level to stop the registration. Each level
halves the resolution of the input volumes. Setting `pyramid_stop_level` to > 0
specifies that the registration should not be run on the original resolution
(level 0).
`solver` selects which solver to use for the energy minimization. Available
solvers are `gco`, `gridcut`, and `icm`. Note: deform needs to be compiled with
`DF_ENABLE_GCO` and `DF_ENABLE_GRIDCUT` for `gco` and `gridcut` to be enabled.
`update_rule` specifies the update rule for updating the displacement field.
This can be either `additive` or `compositive`. The first option updates the
displacement field according to `u(x) = u(x) + delta`, while the second uses
composition, i.e., `u(x) = u(x+delta) + delta`. Compositive updates should
produce a more diffeomorphic transformation. Note: compositive updates are
still considered an experimental feature.
`constraints_weight` sets the weight that is applied for constrained voxels. A
really high value means hard constraints while a lower value may allow
constraints to move a certain amount. Cost for constrained voxels are applied
as constraint_weight * squared_distance, where squared_distance is the distance
from the constraint target. See cost function for more info.
`landmarks_weight` sets the weight for the landmark cost term when performing
landmark-based registration. In order to perform landmark-based registration,
a set of fixed and moving landmarks must be supplied. The implementation of
the landmark-based unary energy term is inspired to [[2]](#2), but the cost in
each term of the sum is also proportional to the distance between the current
displacement and the landmark displacement. It is possible to limit the usage
of the landmarks up to a certain height of the resolution pyramid by assigning
to `landmarks_stop_level` a value greater than zero. `landmarks_decay` controls
the exponential decay of the landmarks effect with respect to distance in image
space: higher values correspond to faster decay.
`block_size` size of the block (in voxels) for the block-wise solver. A block
size of (0,0,0) will result in a single block for the whole volume.
`block_energy_epsilon`, minimum percentage decrease of the block energy
required to accept a solution. Higher epsilon will result in lower run time but
also lower quality.
`max_iteration_count`, maximum number of iterations run on each registration
level. Setting this to -1 (default) allows an unlimited number of iterations.
`step_size`, this is the step size in `mm` that the solver will use. Can be a
single `float` value, in that case the same step size will be used in all
directions, or a sequence `[sx, sy, sz]` of three `float` specifying the size
for each direction.
`regularization_weight`, `regularization_scale`, and `regularization_exponent`
control the importance of the regularization term. The cost function is
specified as `cost = D + a*((b*R)^c)`, where `D = Σw_i*C_i` is the data term
given by the cost functions `C_i` with weights `w_i`, `R` is the regularization
term, `a` is the regularization weight, `b` the regularization scale, and `c`
the regularization exponent.
`levels`, specifies parameters on a per-level basis. The key indicates which
level the parameters apply to, where 0 is the bottom of the resolution pyramid
(last level). The level identifier can not exceed `pyramid_levels`. Parameters
available on a per-level basis are: `constraints_weight`, `landmarks_weight`,
`block_size`, `block_energy_epsilon`, `max_iteration_count`, `step_size`, and
`regularization_weight`.
`image_slots`, specifies how to use the input images. `resampler` only supports
`gaussian` for now, `normalize` specifies whether the volumes should be
normalized before the registration, and `cost_function` allows to provide one
or more cost functions to use. Its value can be the name of a single function
(`ssd` for squared distance, `ncc` for normalized cross correlation, `mi` for
mutual information, `gradient_ssd` for squared distance of the gradients), in
which case its weight is assumed to be `1.0`, otherwise one or multiple
weighted components can be specified by listing each function and its weight.
Each function can accept a set of parameters.
The parameters available for each function are:
+ `ssd`: no parameters available
+ `ncc`:
+ `window` (`string`): shape of the correlation window, either `sphere` or
`cube` (default: `spere`). Note that `cube` is available only if the
program is built with ISPC support. For a given number of samples, the
sphere has a better spatial distribution of the samples, yielding a
slightly superior quality. When running on the CPU, for the same number
of samples (e.g., roughly, a sphere of radius `2` and a cube of radius
`1`) the cube can be significantly faster to compute.
+ `radius` (`int`): radius of the cross-correlation kernel (default: `2`).
For `window=sphere`, given a point where NCC is evaluated, samples are
taken in all the voxels such that the Euclidean distance of each sample
from the point is lesser or equal to `radius`. For `window=cube`,
samples are taken on all voxels within a cube centred on the point and
with side `2×radius + 1`.
+ `mi`:
+ `bins` (`int`): number of histogram bins used in the approximation of
probability densities (default: `255`)
+ `sigma` (`float`): standard deviation of the Gaussian kernel used to
approximate probability densities (default: `4.5`)
+ `update_interval` (`int`): interval (in iterations) between updates of the
entropy estimates (default: `1`). If `0`, updates are disabled.
+ `interpolator` (`'linear'` or `'nearest'`): interpolator used in the update
the entropy estimates (default: `'nearest'`)
+ `gradient_ssd`:
+ `sigma` (`float`): Gaussian smoothing applied to the images before
computing the Sobel operator (default: `0.0`)
### GPU
GPU assisted registration is supported on newer CUDA supported hardware. First
step to enable GPU registration is to compile with the `DF_USE_CUDA=1` flag,
this is set when generating the project with CMake. When both these
prerequisites are met, you simply add the `--gpu` flag to the command-line.
As for now the GPU implementation is considered a pre-release and not all cost
functions and features from the original registration implementation are
supported. Currently the only two supported cost functions are `ssd` and `ncc`.
### Logging
The file name for the log file can be specified through the environment
variable `DF_LOG_FILE`. The minimum level for log messages to be reported can
be set through the environment variable `DF_LOG_LEVEL`, and the possible values
are `Verbose`, `Info`, `Warning`, `Error`, and `Fatal`.
### Masks
It is possible to optionally specify fuzzy masks for the fixed and moving image
space. The two masks can be set independently, and it is possible to use no
mask, only one of the two (either fixed or moving) or both. The masks should be
given in floating point format, and they denote the level of confidence on
image intensity at each voxel. If the mask value `m(x, y, z)` at a certain
location `(x, y, z)` is lesser than or equal to zero, then samples taken at
that location will not contribute to the matching cost. If `m(x, y, z)` is
greater than zero, then the sample will contribute and its cost given at that
location by the image metric will by multiplied by `m(x, y, z)`.
The fixed mask allows to denote a ROI in reference space, formed by all voxels
with strictly positive mask values; for all samples outside such ROI the cost
function will not be computed at all, having the side effect of making the
registration process faster. If a sample belongs to a valid region, then its
mapping through the displacement will be computed and, if a mask for the moving
image is specified, the sample will contribute only if it falls within a valid
ROI in the moving image space, otherwise it will be discarded. The
regularisation term is not weighted by the masks, and it will be always
computed over all the volume, regardless of the mask values.
The moving mask should be used carefully because it can affect the quality of
the result, since there is no penalty for mapping from valid samples in
reference space to regions outside of the moving image mask.
### Regularization weight maps
It is possible to impose voxel-wise regularization weights in the registration
by providing a regularization weight map. This is a map of scalar values the
same size as the fixed image. In the standard case the binary term is computed
a `a|u(v)-u(w)|`, where `a` is the regularization weight. Using a regularization
weight map, the weight `a` is computed as `a = 0.5*(W(v) - W(w))`, where `W` is
the weight map.
## References
+ <a id="1"></a>[1] Junhwan Kim, Vladimir Kolmogorov, Ramin Zabih:
*Visual correspondence using energy minimization and mutual information.*
Proceedings of the Ninth IEEE International Conference on Computer Vision,
1033-1040, 2003.
+ <a id="2"></a>[2] Herve Lombaert, Yiyong Sun, Farida Cheriet:
*Landmark-based non-rigid registration via graph cuts*,
International Conference Image Analysis and Recognition, 166–175, 2007
| 43.160976 | 332 | 0.727 | eng_Latn | 0.993086 |
38017d34e4e08050f9196e859cb7b284215afb77 | 6,214 | md | Markdown | index.md | dgarijo/covid19 | d523daf313bc12e4edee04678954549c48b49386 | [
"Apache-2.0"
] | null | null | null | index.md | dgarijo/covid19 | d523daf313bc12e4edee04678954549c48b49386 | [
"Apache-2.0"
] | null | null | null | index.md | dgarijo/covid19 | d523daf313bc12e4edee04678954549c48b49386 | [
"Apache-2.0"
] | null | null | null | ---
layout: page
title: OEG-UPM COVID-19
subtitle: Aplicando técnicas de Inteligencia Artificial sobre el conjunto de artículos científicos relacionados con el COVID-19
use-site-title: true
---
### En este artículo contamos los esfuerzos que estamos haciendo desde el Ontology Engineering Group de la Universidad Politécnica de Madrid para intentar dar a la comunidad científica y terapéutica respuestas a algunas de sus preguntas, que pueden ser encontradas en la literatura científica que se ha publicado sobre el COVID-19
Desde hace unas semanas, el Allen Institute for Artificial Intelligence mantiene un [corpus actualizado de artículos científicos sobre COVID-19](https://pages.semanticscholar.org/coronavirus-research). A fecha de hoy (27 de marzo de 2020) este corpus contiene más de 44.000 artículos en inglés, con el texto completo de más de 29.000 artículos.
Esta ingente cantidad de literatura científica que se ha generado en apenas unos meses desde la aparición del virus demuestra la gran actividad que se ha generado desde el punto de vista científico para su estudio. Pero al mismo tiempo, es tan grande que se ha convertido en un problema para poder encontrar información específica sobre un tipo de tratamiento que se ha probado en algún grupo específico de la población, relaciones entre tratamientos, resultados obtenidos, etc. Esto es lo que normalmente se conoce como sobrecarga de información.
El 16 de marzo del 2020, la Oficina de Política Científica y Tecnológica de la Casa Blanca realizó un llamamiento a la comunidad internacional de Inteligencia Artificial para el desarrollo de técnicas de procesamiento de lenguaje natural y minería de textos que permitieran resolver [preguntas que la comunidad científica se está realizando sobre el COVID-19](https://www.whitehouse.gov/briefings-statements/call-action-tech-community-new-machine-readable-covid-19-dataset/). Muchas de estas preguntas están recogidas en la plataforma Kaggle ([https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/tasks](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/tasks)), bien conocida por la comunidad de Ciencia de Datos e Inteligencia Artificial. Aquí se han formulado algunas de esas preguntas que se espera poder responder al procesar toda esta literatura científica. De hecho, a fecha de hoy (27 de marzo de 2020) hay más de 350 trabajos registrados donde se han hecho distintos tipos de tratamientos de los textos que han sido proporcionados.
En nuestro grupo llevamos trabajando varios años en el procesamiento de grandes corpus de textos, así que desde la semana pasada nos hemos puesto a trabajar para intentar aportar nuestro grano de arena en la solución de estos problemas. Os vamos a contar lo que hemos hecho hasta ahora, y los recursos que ponemos a disposición del resto de la comunidad científica, por si pueden resultar útiles. Asimismo, hacemos un llamamiento a todos aquellos que queráis colaborar con nosotros con nuevas preguntas, con la validación de los resultados que hemos obtenido hasta ahora, con la aplicación de nuevos algoritmos, o con cualquier otra cosa que se os ocurra.
## Nuestra pregunta: ¿es posible relacionar de algún modo principios activos, grupos terapéuticos, pruebas, tratamientos y diagnósticos que se están reportando en la literatura científica?
La pregunta con la que hemos comenzado a trabajar es esta. Queremos ser capaces de ofrecer herramientas a la comunidad científica, así como a responsables de la gestión de la epidemia, que faciliten la navegación por el corpus para poder entender mejor qué tipos de tratamientos se han probado según lo reportado en la documentación científica, y así permitir entender mejor qué combinaciones de principios activos han sido probadas, lo que podría servir, por ejemplo, para definir nuevos protocolos clínicos de tratamiento para pacientes con algunas condiciones específicas.
Los servicios que hemos construido son la base sobre la que esperamos que este tipo de preguntas se puedan resolver. Aún queda mucho trabajo por hacer, sobre todo para proporcionar unas herramientas más cercanas a los usuarios finales. Por el momento nos hemos centrado en proporcionar las herramientas que otros desarrolladores también pueden utilizar para poder partir de un corpus ya refinado y anotado con una variedad de herramientas.
<!--<div class="posts-list">
{% for post in paginator.posts %}
<article class="post-preview">
<a href="{{ post.url | relative_url }}">
<h2 class="post-title">{{ post.title }}</h2>
{% if post.subtitle %}
<h3 class="post-subtitle">
{{ post.subtitle }}
</h3>
{% endif %}
</a>
<p class="post-meta">
Posted on {{ post.date | date: site.date_format }}
</p>
<div class="post-entry-container">
{% if post.image %}
<div class="post-image">
<a href="{{ post.url | relative_url }}">
<img src="{{ post.image | relative_url }}">
</a>
</div>
{% endif %}
<div class="post-entry">
{{ post.excerpt | strip_html | xml_escape | truncatewords: site.excerpt_length }}
{% assign excerpt_word_count = post.excerpt | number_of_words %}
{% if post.content != post.excerpt or excerpt_word_count > site.excerpt_length %}
<a href="{{ post.url | relative_url }}" class="post-read-more">[Read More]</a>
{% endif %}
</div>
</div>
{% if post.tags.size > 0 %}
<div class="blog-tags">
Tags:
{% if site.link-tags %}
{% for tag in post.tags %}
<a href="{{ '/tags' | relative_url }}#{{- tag -}}">{{- tag -}}</a>
{% endfor %}
{% else %}
{{ post.tags | join: ", " }}
{% endif %}
</div>
{% endif %}
</article>
{% endfor %}
</div>
{% if paginator.total_pages > 1 %}
<ul class="pager main-pager">
{% if paginator.previous_page %}
<li class="previous">
<a href="{{ paginator.previous_page_path | relative_url }}">← Newer Posts</a>
</li>
{% endif %}
{% if paginator.next_page %}
<li class="next">
<a href="{{ paginator.next_page_path | relative_url }}">Older Posts →</a>
</li>
{% endif %}
</ul>
{% endif %}-->
| 69.044444 | 1,083 | 0.735597 | spa_Latn | 0.98783 |
3802d0aeefb929725bec5ac9ad2b70f3119ef970 | 324 | md | Markdown | _posts/2017-06-14-winter-camp-17.md | mksilva42/maratonusp-blog | 3a894885527f36ef5597a1f816e30d10879d2d0d | [
"MIT"
] | 1 | 2020-11-26T14:34:27.000Z | 2020-11-26T14:34:27.000Z | _posts/2017-06-14-winter-camp-17.md | mksilva42/maratonusp-blog | 3a894885527f36ef5597a1f816e30d10879d2d0d | [
"MIT"
] | 7 | 2017-11-23T21:37:29.000Z | 2017-12-24T12:04:20.000Z | _posts/2017-06-14-winter-camp-17.md | maratonime/blog | 7c49fec72d793cb507b72d3d95890af7cb7569cf | [
"MIT"
] | 5 | 2018-12-26T13:49:09.000Z | 2021-07-25T14:20:33.000Z | ---
title: Inscrições abertas para o Winter Camp 17
date: '2017-06-14 11:42:00'
layout: post
categories:
- noticias
---
As inscrições para o Winter Camp 17 estão abertas! Mais informação [neste link]({{ site.baseurl }}/winter17).
[![logoWinter]({{ site.baseurl }}/images/house-of-carlos.png)]({{ site.baseurl }}/winter17)
| 27 | 109 | 0.709877 | por_Latn | 0.754477 |
380369ad52a2030a34d9b2e1648f97834ee82476 | 4,010 | markdown | Markdown | source/_posts/2016-10-14-copy-sync-files-from-to-remote-server.markdown | basti/icebergist.com | ebadd01c0a723c7294817cdec7e4d5962fef1292 | [
"MIT"
] | 1 | 2015-01-23T15:36:19.000Z | 2015-01-23T15:36:19.000Z | source/_posts/2016-10-14-copy-sync-files-from-to-remote-server.markdown | basti/icebergist.com | ebadd01c0a723c7294817cdec7e4d5962fef1292 | [
"MIT"
] | null | null | null | source/_posts/2016-10-14-copy-sync-files-from-to-remote-server.markdown | basti/icebergist.com | ebadd01c0a723c7294817cdec7e4d5962fef1292 | [
"MIT"
] | null | null | null | ---
redirect_to: https://www.axiomq.com/blog/copy-sync-files-from-to-remote-server/
layout: post
title: "Copy and sync files from/to remote server"
date: 2016-11-07 16:55:52 +0200
comments: true
author: Slobodan Kovačević
categories:
- Misc
tags:
- tips
- linux
- ubuntu
- apprenticeship
published: true
---
Most modern web app deployments have automated scripts that perform all tasks needed to deploy the app. They handle all the dirty details, while the developer just needs to do something simple like `cap deploy`. In other words, usually you don't need to access the remote servers directly.
However, sometimes you run into one-time tasks (or less frequent tasks) that might not have been automated. For example, dumping production data and importing on local machine, syncing uploaded files between production and staging environments, etc.
These often involve transferring files between your local machine and remote server (or two remote servers). There are few ways you can handle this depending on what you need to transfer between servers. We are going to cover methods using `wget`, `scp`, and `rsync`.
<!--more-->
## wget
Simplest option is to install `wget` on destination machine. `wget` is the non-interactive network downloader and you just give it the URL of the file you want to download.
`wget http://example.com/some/archive.tar.gz`
That would download the file to current directory.
The downside is that you have to put the file somewhere accessible via web, like `public` dir in Rails apps and also you should remember to remove it once you are done with it.
Also this only works if you have a single file, or you are able to create a single file (most likely by [creating an archive using tar and gzip](/posts/create-and-extract-archives-using-tar-and-gzip/)).
## scp
`scp` is a remote file copy program and the name is short for __s__ecure __c__o__p__y. It's very similar to usual `cp` command with the difference that it's able to copy files across different computers using SSH.
Simplest forms of `scp` have source and destination, both of which can be either a local file or a remote file.
```sh
# copy file from server to current dir on local machine
scp [email protected]:/home/myuser/databasedump.sql ./
# copy file to remote server
scp ./some/localfile.txt [email protected]:/home/myuser/
```
In a sense it works exactly like `cp`. Difference is that when specifying a remote file the format looks like you concatenated SSH user@server string and normal file path (you are saying "as user at server get this file").
There are some additional nice things about `scp`:
- you can use `-r` option which will recursively copy entire directories.
- you can specify two remote files and it will copy them between remote servers.
## rsync
`scp` is secure version of `rcp` (remote copy) program. `rsync` is faster, flexible replacement for `rcp`. It copies files either to or from a remote host, or locally on the current host (it does not support copying files between two remote hosts).
There are many ways you can use `rsync`. The most usual variants are:
```sh
# copy all files recursively from one local dir to another
rsync ./source_dir ./destination_dir
# copy a file from local dir to remote server
rsync -Pavz ./archive.tar.gz [email protected]:/home/myuser/somedata/
# copy all files recursively from remote server to local dir
rsync -Pavz [email protected]:/home/myuser/somedata ./data
```
Options that were used are not strictly necessary for doing stated tasks, but they help. They are as follows:
- `P` - same as --partial --progress. It means it will show transfer progress and if transfer breaks keep a partial copy to possibly continue on retry. Very useful for large files.
- `a` - it is a quick way of saying you want recursion and want to preserve almost everything. This is equivalent to -rlptgoD.
- `v` - be verbose.
- `z` - compress file data during the transfer.
I suggest that you consult `man rsync` for more details.
| 47.176471 | 289 | 0.765835 | eng_Latn | 0.999175 |
3803812ec92a55fb321919ae2c9cc65cc2b654cc | 3,514 | md | Markdown | Readme.md | TylerRick/paper_trail-active_record | 838c8cca35c0703fdd8a2515928020b2189c105b | [
"MIT"
] | null | null | null | Readme.md | TylerRick/paper_trail-active_record | 838c8cca35c0703fdd8a2515928020b2189c105b | [
"MIT"
] | null | null | null | Readme.md | TylerRick/paper_trail-active_record | 838c8cca35c0703fdd8a2515928020b2189c105b | [
"MIT"
] | null | null | null | # PaperTrail::ActiveRecordExt
[![Gem Version][1]][2]
[![Yard Docs](http://img.shields.io/badge/yard-docs-blue.svg)](https://rdoc.info/github/TylerRick/paper_trail-active_record/master)
Various ActiveRecord extensions to make your life easier when working with
[PaperTrail](https://github.com/paper-trail-gem/paper_trail) versions and versioned model records.
## Methods added to models with `has_paper_trail`
- `.versions`
- `.find_deleted_version`
- `.find_deleted`
- `.has_many_versions`
- `.has_related_versions`
- `.has_versions_with_all_related`
- `created_version`
- `paper_trail_update_column_using_value_from_changes`
- `paper_trail_update_columns_using_value_from_changes`
## Methods added to `PaperTrail::Version` (`VersionConcern`)
- `.preceding_inclusive`
- `.between_inclusive`
- `scope :where_object_changed`
- `scope :where_object_changed_any`
- `#action`
- `#item_class`
## `{association}_or_deleted`
### `def define_assoc_or_deleted(assoc_name, suffix: nil)`
Defines a `{association}_or_deleted` method for the given association. This method will call
the usual association method to try to find the associated record but if that returns nil,
will fall back to looking for a deleted record from the `versions` history (using
`klass.find_deleted`).
You can replace the `or_deleted` part with a different suffix using `suffix:` option.
You can even give it the same name as the existing association method if you want to override
the existing method with one that always falls back to looking for a deleted record.
```ruby
class Post
belongs_to :author
# overrides author method with a version that finds deleted if not found
define_assoc_or_deleted :author, suffix: nil
```
### Automatically add for all assocations
If you include `PaperTrail::ActiveRecordExt::OrDeleted` into a model, it will automatically add a `{association}_or_deleted`
method for every `belongs_to` or `has_one` association that is defined.
Because it reflects on all associations on that model as soon as it is included, make sure to
include it *after* all of your associations are defined. You can also call
`define_assoc_or_deleted_on_all_associations` at the end of your model class (that is the same
method that including the module triggers).
If you want it to automatically be added for all assocations on *all* application models, you can
use [gem 'active_record_include'](https://github.com/TylerRick/active_record_include) like this:
```ruby
class ApplicationRecord < ActiveRecord::Base
include_when_connected PaperTrail::ActiveRecordExt::OrDeleted
```
## Installation
Add this line to your application's Gemfile:
```ruby
gem 'paper_trail-active_record'
```
And then execute:
$ bundle
## Development
After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
## Contributing
Bug reports and pull requests are welcome on GitHub at
https://github.com/TylerRick/paper_trail-active_record.
[1]: https://badge.fury.io/rb/paper_trail-active_record.svg
[2]: https://rubygems.org/gems/paper_trail-active_record
| 36.989474 | 324 | 0.778315 | eng_Latn | 0.980729 |
3803a96349f1ac3d89ee064951e5dd6cbea6a13d | 3,820 | md | Markdown | README.md | guillaumedoutriaux/Name-That-Color | 07f95e5170fc6a9c354d01767542d3b56cc372e7 | [
"MIT"
] | 21 | 2017-01-25T21:06:05.000Z | 2022-02-18T13:03:54.000Z | README.md | guillaumedoutriaux/Name-That-Color | 07f95e5170fc6a9c354d01767542d3b56cc372e7 | [
"MIT"
] | 11 | 2017-01-17T19:21:20.000Z | 2022-02-17T19:56:51.000Z | README.md | guillaumedoutriaux/Name-That-Color | 07f95e5170fc6a9c354d01767542d3b56cc372e7 | [
"MIT"
] | 4 | 2017-04-02T21:34:58.000Z | 2020-05-10T20:37:26.000Z | # Name That Color
A VS Code Plugin to convert hex or rgb color representation into friendly names built upon Chirag Mehta's [ntc.js](http://chir.ag/projects/ntc/).
![Badge for version for Visual Studio Code extension Name That Color](https://vsmarketplacebadge.apphb.com/version/guillaumedoutriaux.name-that-color.svg?color=blue&style=flat-square&logo=visual-studio-code)
![Installs](https://vsmarketplacebadge.apphb.com/installs-short/guillaumedoutriaux.name-that-color.svg?color=blue&style=flat-square)
![Downloads](https://vsmarketplacebadge.apphb.com/downloads-short/guillaumedoutriaux.name-that-color.svg?color=blue&style=flat-square)
![MIT License](https://img.shields.io/github/license/guillaumedoutriaux/name-that-color?color=blue&style=flat-square)
## Features
- Get a friendly name from color representation
- Generate Sass or CSS variables name from color representation
- Deals with close color representations (return the same name)
- Works with single, multiple and block selections
- Works with Hex and RGB
## Usage
### Get color name
- Select a color
- From the command palette Ctrl-Shift-P (Windows, Linux) or Cmd-Shift-P (OSX),
- Select Name that color : get color name
![feature get color name](https://github.com/guillaumedoutriaux/name-that-color/raw/master/images/feature-get.gif)
### Replace selectin with color name
- Select a color
- From the command palette Ctrl-Shift-P (Windows, Linux) or Cmd-Shift-P (OSX),
- Select Name that color : replace selection
![feature replace color code with friendly name](https://github.com/guillaumedoutriaux/name-that-color/raw/master/images/feature-replace.gif)
### Generate Sass variable
- Select a color
- From the command palette Ctrl-Shift-P (Windows, Linux) or Cmd-Shift-P (OSX),
- Select Name that color : generate Sass variable
![feature generate sass variable](https://github.com/guillaumedoutriaux/name-that-color/raw/master/images/feature-sassvar.gif)
### Generate CSS variable
- Select a color
- From the command palette Ctrl-Shift-P (Windows, Linux) or Cmd-Shift-P (OSX),
- Select Name that color : generate CSS variable
![feature generate css variable](https://github.com/guillaumedoutriaux/name-that-color/raw/master/images/feature-cssvar.gif)
> Tip: It works for single, multiple and block selection as well.
![feature multiple selection](https://github.com/guillaumedoutriaux/name-that-color/raw/master/images/feature-multiple.gif)
> Tip: You can choose the delimiter used in variables.
![choose variable delimiter](https://github.com/guillaumedoutriaux/name-that-color/raw/master/images/settings-delimiter.gif)
> Tip: You can add a prefix and/or a suffix to variables.
![add variable prefix or suffix](https://github.com/guillaumedoutriaux/name-that-color/raw/master/images/prefix-suffix.png)
> Tip: You can use both Hex and RGB colors.
![hexadecimal and rgb colors are supported](https://github.com/guillaumedoutriaux/name-that-color/raw/master/images/feature-rgb.gif)
## Installation
If you have any requirements or dependencies, add a section describing those and how to install and configure them.
- Install Visual Studio Code 1.5.0 or higher
- Launch Code
- From the command palette Ctrl-Shift-P (Windows, Linux) or Cmd-Shift-P (OSX)
- Select Install Extension
- Choose the extension (Name That Color)
- Reload Visual Studio Code
## Source
[GitHub](https://github.com/guillaumedoutriaux/name-that-color)
## Contribute
If you have any problem, idea or suggestion, feel free to create issues and pull requests on [GitHub](https://github.com/guillaumedoutriaux/name-that-color).
### Credit
Chirag Mehta [http://chir.ag](http://chir.ag)
Guillaume Doutriaux (@gdoutriaux)
Przemysław Adamczewski (CSS Variables)
### License
MIT - [https://opensource.org/licenses/MIT](https://opensource.org/licenses/MIT)
| 38.979592 | 207 | 0.775131 | eng_Latn | 0.739166 |
3804e329c57c618445af03aeca554940022976c1 | 1,277 | md | Markdown | _posts/2021-04-11-帕梅拉.md | sun-tian/sun-tian.github.io | 179a0646232128b39ea12156ff292b394e9f24a3 | [
"MIT"
] | 1 | 2020-09-27T02:12:04.000Z | 2020-09-27T02:12:04.000Z | _posts/2021-04-11-帕梅拉.md | sun-tian/sun-tian.github.com | 179a0646232128b39ea12156ff292b394e9f24a3 | [
"MIT"
] | 181 | 2019-08-21T12:09:31.000Z | 2021-04-21T07:55:17.000Z | _posts/2021-04-11-帕梅拉.md | sun-tian/sun-tian.github.com | 179a0646232128b39ea12156ff292b394e9f24a3 | [
"MIT"
] | null | null | null | ---
layout: post # 使用的布局(不需要改)
title: 帕梅拉 # 标题
subtitle: 健身 #副标题
date: 2021-04-11 # 时间
author: 甜果果 # 作者
header-img: https://cdn.jsdelivr.net/gh/tian-guo-guo/[email protected]/assets/img/post-bg-swift2.jpg #这篇文章标题背景图片
catalog: true # 是否归档
tags: #标签
- 读书笔记
---
# 帕梅拉
Bilibili: **[帕梅拉PamelaReif](https://space.bilibili.com/604003146)**
## 跳舞
帕姐帕梅拉 有氧舞蹈系列燃脂 45min 自用
<iframe width="560" height="315" src="//player.bilibili.com/player.html?aid=626079715&bvid=BV1gt4y197jF&cid=202302813&page=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
## 腹部
【中字】10分钟腹部训练,一周见马甲线!(高级&初级&中级&全方位训练)
<iframe width="560" height="315" src="//player.bilibili.com/player.html?aid=78492244&bvid=BV1jJ411v7Tf&cid=134299684&page=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
帕梅拉 - 10min 下腹部训练|紧致小腹 平坦小肚子 (Pamela Reif Official)
<iframe width="560" height="315" src="//player.bilibili.com/player.html?aid=628231081&bvid=BV1rt4y1k7Wq&cid=267139339&page=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
etc... | 37.558824 | 220 | 0.641347 | yue_Hant | 0.115585 |
3806c9f9fdc4a51a21a9404a7bd7883268fd2a52 | 2,637 | md | Markdown | README.md | balint42/nodeREST | f25010d6c8676b56050fecd5d7892f5231c6b3eb | [
"MIT"
] | null | null | null | README.md | balint42/nodeREST | f25010d6c8676b56050fecd5d7892f5231c6b3eb | [
"MIT"
] | null | null | null | README.md | balint42/nodeREST | f25010d6c8676b56050fecd5d7892f5231c6b3eb | [
"MIT"
] | null | null | null | # nodeREST
This is a demo for a full stack, nodeJS based single page application that implements RESTful APIs for user authentication and managment of the resources "user" and "expense". It showcases a simple app where users can sign-up, log-in and create "expense" records as well as view stats of their expenses over time.
The user authentication is RESTful based on JWT tokens and implements 3 roles:
- admins can CRUD on all resources
- managers can CRUD on all users (except admin) and own expenses
- users can CRUD on own expenses
The front end is based on the great semantic UI and jQuery.
The goal of this app is to show best practices and provide a boilerplate for a nodeJS based single page app that implements RESTful user authentication with roles and RESTful user & resource managment.
## Configuration
For testing / dev environments copy the `test.env.tmpl` and `development.env.tmpl` files and rename them in `test.env` and `development.env` and adjust the values as needed. Then
- use `grunt test` to run tests
- use `grunt dev` to run the app in dev mode
In production environment you have to set all the environment variables from these files on your system while adjusting the values as needed. **Be sure to set `NODE_ENV=production`.** The connection to `mongodb` must be made with an admin user.
As and admin this should be all you need. As a developer you might also want to look into `config/config.js`.
## Logging
The app logs info, debug and error messages to `log/` and to the standard out. It is recommended to use the standard out to collect all messages to be sent to other services. Note that log files can become quite huge, set the log level via the `LOG_LEVEL` environment variable.
## Health Checks
The app provides a `/health` route that will return `{ "status": "OK" }` if the app runs as expected. Use this for health monitoring.
## Routes
All app API endpoints can be reached under their corresponding version as in `/v1/*`. The API behaves RESTful based on standard HTTP request methods.
## Deployment
IMPORTANT: the app **must** be deployed & used with an SSL certificate to secure the connections as passwords are being sent unencrypted!
This app was developed for any Linux OS with
- `node.js` >= 6.2.0
- `npm` >= 3.8.9
- `mongodb` >= 3.2.0
and for testing / dev environment
- `eslint` >= 2.11.0
- `grunt-cli` >= 1.2.0 & `grunt` >= 0.4.5
After installing these dependencies and configuring as described under "Configuration" run `npm install` (and `grunt test` in dev environment).
Run the service via `node server.js`.
NOTE: the connection to `mongodb` must be made as an admin user.
| 61.325581 | 313 | 0.761092 | eng_Latn | 0.998267 |
38085906c6a0c2169d0622015500befe8ec69fac | 9,438 | md | Markdown | _posts/2020-4-20-PY1.md | Magnificent98/Magnificent98.github.io | 00c28a81d1c804bb786180cddfb33811994bc611 | [
"MIT"
] | 1 | 2020-03-27T01:45:50.000Z | 2020-03-27T01:45:50.000Z | _posts/2020-4-20-PY1.md | Magnificent98/Magnificent98.github.io | 00c28a81d1c804bb786180cddfb33811994bc611 | [
"MIT"
] | null | null | null | _posts/2020-4-20-PY1.md | Magnificent98/Magnificent98.github.io | 00c28a81d1c804bb786180cddfb33811994bc611 | [
"MIT"
] | null | null | null | ---
layout: post
title: Python(I)
category: PYTHON
date: 2020-4-20
---
# And here we start Python! I like python!
## 1.Overall
<p> python是一种强类型语言。还是一种动态类型语言。强类型语言决定了它不经过强制类型转换,就会一直呆在一个类型不动。而动态类型语言决定了一个变量可以被赋成不同类型的值。 </p><br/>
<p> python并不能直接编译出机器码,而是依托python解释器执行,在这个层面上,python是一个解释型语言。它与其他语言不同的地方也在于,它是采用基于值的管理方式。具体情况如下: </p><br/>
```python
if __name__ == "__main__":
a = 5
b = 5
print id(a) #id = 140438844511320
print id(b) #id = 140438844511320
print id(5) #id = 140438844511320
```
<br/>
<p> python的代码风格独树一帜,没有句尾符号,采用缩进区分行与块。注释使用‘#’引导。python是个纯的面向对象的语言,在python中一切都是对象。 </p><br/>
<p> 为了学好python,dir() help()是两个常用的方法。前者列出了指定模块中的所有成员,后者返回指定模块或者函数的说明文档。</p><br/>
```python
import math
print dir(math)
help(math.acos)
```
<br/>
<p> 最后献上 The Zen of Python。努力让自己编写的代码更加优雅,更加 pythonic。</p><br/>
```python
import this
#The Zen of Python, by Tim Peters
#Beautiful is better than ugly.
#Explicit is better than implicit.
#Simple is better than complex.
#Complex is better than complicated.
#Flat is better than nested.
#Sparse is better than dense.
#Readability counts.
#Special cases aren't special enough to break the rules.
#Although practicality beats purity.
#Errors should never pass silently.
#Unless explicitly silenced.
#In the face of ambiguity, refuse the temptation to guess.
#There should be one-- and preferably only one --obvious way to do it.
#Although that way may not be obvious at first unless you're Dutch.
#Now is better than never.
#Although never is often better than *right* now.
#If the implementation is hard to explain, it's a bad idea.
#If the implementation is easy to explain, it may be a good idea.
#Namespaces are one honking great idea -- let's do more of those!
```
<br/><br/>
## 2.Structure
<p> 面对一个新的风格的编程语言,先从代码结构开始了。 </p><br/>
### 2.1 选择结构
```python
#coding=utf-8
if __name__ == '__main__':
a = input('input a:')
b = input('input b:')
# 单分支选择结构
if a > b:
a, b = b, a
print(a, b)
# 双分支选择结构
chTest = [1, 2, 3, 4, 5]
if chTest:
print(chTest)
else:
print('Empty')
# 多分支选择结构
score = 90
if score > 100:
print('wrong')
elif score >= 90:
print('A')
elif score >= 80:
print('B')
elif score >= 60:
print('C')
else:
print('F')
# 特殊表现形式
# value1 if condition else value2
x = 10
print(6 if x >= 10 else 5)
```
<p> 下面给出一个小案例 </p><br/>
```python
#encoding=utf-8
# 计算今年过了多少天
import time
date = time.localtime()
year = date[0]
month = date[1]
day = date[2]
day_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
if year % 400 == 0 or (year % 4 == 0 and year % 100 != 0):
day_month[1] = 29
if month == 1:
print(day)
else:
print(sum(day_month[:month-1])+day)
```
<br/>
### 2.2 循环结构
```python
#encoding=utf-8
if __name__ == '__main__':
# while循环
# 可以加else子句
x = 1
while x != 10:
print(x)
x += 1
else:
print('out')
# for循环
# for 取值 in 序列或迭代对象:
s = 0
for i in range(1, 101):
s += i
else:
print(s)
# break 与 continue 的作用方式与 C 中相同
```
<p> 在循环结构中可以多使用局部变量来提高运算速度。</p><br/>
```python
#encoding=utf-8
import time
import math
# 设置一个计时器
start = time.time()
for i in xrange(10000000):
math.sin(i)
print('Time used with global variable:', time.time()-start)
# 将sin函数变成局部变量
loc_sin = math.sin
start = time.time()
for i in xrange(10000000):
loc_sin(i)
print('Time used with local variable:', time.time()-start)
# results:
# ('Time used with global variable:', 2.6982831954956055)
# ('Time used with local variable:', 2.26613712310791)
```
<br/><br/>
## 3.Function
<p> 学习完结构,学习函数的设计与使用。 </p><br/>
```python
#encoding=utf-8
# 定义函数
# 不需要返回类型,形参类型。
# 可以加一段注释,为使用函数的人提供方便
def printMax(a, b):
'''This function accept two integers and print the bigger one.'''
if a > b:
print(a)
else:
print(b)
if __name__ == '__main__':
# 使用help()查看函数帮助时,显示的是自己编辑的注释
help(printMax)
a = input('input a:')
b = input('input b:')
printMax(a, b)
```
<br/>
<p> 一般在函数的内部直接修改形参是不会引起实参变化的。如果传递给函数的是python的可变序列,且在函数内部使用下标或其它方式改变可变序列的时候,修改后的结果可以反映到函数。 </p><br/>
```python
#encoding=utf-8
# 多数情况下形参改变不会反应到实参
def modify(a):
a += 1
print("a in modify "+str(a))
if __name__ == '__main__':
a = 5
modify(a)
print("a in main "+str(a))
#################################
#encoding=utf-8
# 可变序列,用下标访问,可以改变元素的值
def modify(a):
a[0] += 1
print("a in modify ", a)
if __name__ == '__main__':
a = [1, 2, 3]
modify(a)
print("a in main ", a)
```
<br/>
### 3.1 形式参数类型
<p> python中函数的参数类型有以下几种: </p><br/>
<p>1. 默认值参数</p><br/>
```python
#encoding=utf-8
# 默认值参数
# 带默认值的参数必须出现在函数的最右端
def say(message, times=1):
'''accpet message and n.
to say the message in n times.'''
print((message+' ')*times)
if __name__ == '__main__':
help(say)
say("hello")
say("hello", 5)
# 查看函数默认参数的当前值
print(say.func_defaults)
# 可变默认参数的坑
# 这里old_list是一个可变默认参数,在定义函数的时候,就生成了一个空的list作为它的值
def demo(newitem, old_list=[]):
old_list.append(newitem)
return old_list
print(demo('5',[1, 2, 3, 4]))
print(demo('a'))
# 因为执行了append函数,此时的可变默认参数old_list已经变成了'a'
print(demo.func_defaults)
print(demo('b'))
# 那么正确的做法是什么呢?
def demo(newitem, old_list=None):
if old_list is None:
old_list = []
old_list.append(newitem)
return old_list
```
<br/>
<p>2. 关键参数</p><br/>
```python
#encoding=utf-8
# 关键参数指的是调用函数时的参数传递方式。
def demo(a, b, c=5):
print(a, b, c)
# 在调用函数的时候,如果忘记了参数的顺序,可以按照参数名字进行传值。
if __name__ == '__main__':
demo(a=7, b=2, c=6)
```
<br/>
<p>3. 可变长度参数</p><br/>
```python
#encoding=utf-8
# 当形参是*p的时候,表示接受任意多个实参放入一个元组中
def demo(*p):
print(p)
# 当形参是**p的时候,表示接受任意多个显式赋值放入一个字典中
def demo2(**p):
print(p)
if __name__ == '__main__':
demo(1,2,3)
demo2(x=1,y=2,z=3)
# result:
# (1, 2, 3)
# {'y': 2, 'x': 1, 'z': 3}
```
<br/>
<p>4. 参数传递时的序列解包</p><br/>
```python
#encoding=utf-8
# 使用列表,元组,集合,字典等可迭代对象作为实参
def demo(a,b,c):
print(a, b, c)
if __name__ == '__main__':
seq = [1, 2, 3]
# 在实参名前面加一个* python来帮我们自动解包
demo(*seq)
dic = {1: 'a', 2: 'b', 3: 'c'}
# 在字典的情况下,默认使用key
demo(*dic)
# 要想使用key-value,则使用items()方法
demo(*dic.items())
# 要想使用值,则使用values()方法
demo(*dic.values())
```
<br/>
### 3.2 变量作用域
```python
# encoding=utf-8
def demo():
# 在函数中声明一个变量x为全局变量
global x
x = 3
y = 4
print(x, y)
if __name__ == '__main__':
# 声明一个全局变量x
x = 5
# 调用了demo函数,因为已经有全局变量x
# demo中的x即全局变量x
demo()
print x
# y不存在
# print y
del x
# 此时没有x,但是demo中声明了全局变量x
demo()
print x
```
<br/>
### 3.3 lambda表达式
<p> lambda表达式是一种具有函数表达功能的表达式。它又称为匿名函数,功能堪比函数,设计更加简洁。 </p><br/>
```python
# encoding=utf-8
# lambda 参数表: 表达式
f = lambda x, y, z: x+y+z
print(f(1, 2, 3))
def demo(x, y, z):
print(x, y, z)
# lambda表达式可以调用函数
g = lambda x, y=4, z=5: demo(x, y, z)
g(1)
# 注意:lambda表达式可以看做是一个函数,也遵循变量的作用域规则
r = []
for x in range(10):
# r是一个lambda表达式序列
r.append(lambda : x**2)
# 执行的时候,x=9
print r[0]()
r = []
for x in range(10):
r.append((lambda n=x: n**2))
# 执行的时候,lambda n=0: n**2
print r[0]()
```
<br/>
### 3.4 高级话题
<p>1. map()</p><br/>
```python
#encoding=utf-8
# map()可以将一个单参数函数依次作用到一个序列或者迭代器对象的每一个元素上。
# 返回结果是一个列表;该函数不对原序列做任何修改
L = [1, 2, 3, 4, 5]
result = map((lambda x: x+5), L)
print result
# [6, 7, 8, 9, 10]
result = map(str, L)
print result
# ['1', '2', '3', '4', '5']
```
<br/>
<p>2. reduce()</p><br/>
```python
#encoding=utf-8
# reduce()可以将一个接受两个参数的函数以累积的方式
# 从左到右依次作用到一个序列或者迭代器对象的所有元素上
L = [1, 2, 3, 4, 5]
result = reduce((lambda x, y: x+y), L)
# 过程为:
# (((1+2)+3)+4)+5
print result
# 15
result = reduce((lambda x, y: x+y), map(str, L))
print result
# 12345
```
<br/>
<p>3. filter()</p><br/>
```python
#encoding=utf-8
# filter()可以将一个单参数函数作用到一个序列上
# 返回使函数返回true的元素组成的序列
L = [1, 2, 3, 4, 5]
result = filter((lambda x: x <= 3), L)
print result
# [1, 2, 3]
```
<br/>
<p>4. 生成器</p><br/>
```python
#encoding=utf-8
# 在python中,使用了yeild的函数被称为生成器
# 生成器可以理解为一个迭代器,只能用于迭代操作
# 每次调用生成器,遇到yeild语句会暂停并保存当前的信息,返回yeild后的东西
# 调用next()方法时,会从yeild语句下一句开始执行
# 生成器第一弹
# 这样可以一次生成一个,避免所有数据都在内存中
# 大数据的时候效果比较明显
mygenerator = (x*x for x in range(3))
for i in range(3):
print mygenerator.next()
# 生成器第二弹
# 带有yeild语句的函数是一个生成器
def fib():
a, b = 1, 1
while True:
yield a
a, b = b, a+b
a = fib()
for i in range(10):
print a.next()
```
<br/>
<p>5. 字节码</p><br/>
```python
#encoding=utf-8
# 使用dis模块的功能可以查看函数的字节码指令
# 这些都是交给python虚拟机执行的
import dis
def add(n):
n += 1
return n
dis.dis(add)
# result
# 5 0 LOAD_FAST 0 (n)
# 3 LOAD_CONST 1 (1)
# 6 INPLACE_ADD
# 7 STORE_FAST 0 (n)
# 6 10 LOAD_FAST 0 (n)
# 13 RETURN_VALUE
```
<br/>
<p>6. 函数嵌套定义与可调用对象</p><br/>
```python
#encoding=utf-8
# 在python中,函数是可以嵌套定义的
def linear(a, b):
def result(x):
# 此时a,b都是对result函数可见的
return a * x + b
return result
a = 5
b = 10
print linear(a, b)(3)
# 在类中定义__call__函数
# 该类的对象是可以直接调用的
class linear:
def __init__(self, a, b):
self.a, self.b = a, b
def __call__(self, x):
return self.a * x + self.b
taxes = linear(5, 10)
# 直接调用该类的对象
print taxes(3)
```
<br/> | 17.807547 | 117 | 0.607544 | eng_Latn | 0.29306 |
3809c24e6146951aca3a6e0ad2e05c1f187c6497 | 3,366 | md | Markdown | info/web/setup/apache.md | lgs-games/memorize | aab49b0d44d70502b576edf09fdb95f47ed1d8ae | [
"Apache-2.0"
] | 2 | 2021-09-14T18:43:24.000Z | 2022-03-05T11:56:32.000Z | info/web/setup/apache.md | lgs-games/memorize | aab49b0d44d70502b576edf09fdb95f47ed1d8ae | [
"Apache-2.0"
] | 18 | 2021-08-28T17:30:05.000Z | 2021-09-14T10:28:00.000Z | info/web/setup/apache.md | lgs-games/memorize | aab49b0d44d70502b576edf09fdb95f47ed1d8ae | [
"Apache-2.0"
] | 1 | 2021-08-16T17:12:04.000Z | 2021-08-16T17:12:04.000Z | # Apache server
[Go back](../index.md#webserver)
I'm not an expert in setting up an Apache Server. I'm quite fond of [Digital ocean tutorials](https://www.digitalocean.com/community/tutorials/how-to-install-the-apache-web-server-on-debian-10) (or [this one for MariaDB+PHP](https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mariadb-php-lamp-stack-on-debian-10)).
My notes
* You may enable .htaccess with `AllowOverride All`.
* You can add SSL certificates with [Let's encrypt](https://certbot.eff.org/lets-encrypt/debianbuster-apache)
* The logs are inside `/var/log/apache2` (default)
* You can configure [Postfix](https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-postfix-as-a-send-only-smtp-server-on-debian-10) to send mails. This is complex, and I read a lot of tutorials (DMarc, SPF, DKim, etc.). You may use [mail-tester.com](https://www.mail-tester.com/) to test your server (don't forget to wait for around 12h before checking again).
Some commands you may use
```bash
# create a conf for each website
cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/example.com.conf
# complete it (add *:443, redirect www to without www)
# enable SSL, add Protocols, etc.
vim /etc/apache2/sites-available/example.com.conf
# once you're done, enable the conf
sudo a2ensite example.com.conf
# test config
sudo apachectl configtest
# restart
sudo systemctl reload apache2
```
<hr class="sl">
## HTTP2.0
You should have a line like this in your conf: `Protocols h2 http/1.1` (if not, add it, in VirtualHost). If not, then add it. It means that if h2 is available, then it will be used, otherwise, http/1.1 will be used. Then, server-side, you should use these commands to enable HTT2 (short version of this [complete guide](https://http2.pro/doc/Apache)).
```bash
# you will have to add sudo/use root
a2enmod http2
apachectl restart
apachectl stop
# if needed
apt-get install php7.1-fpm
a2enmod proxy_fcgi setenvif
a2enconf php7.1-fpm
# it won't really disable php7.1
a2dismod php7.1
a2dismod mpm_prefork
a2enmod mpm_event
apachectl start
```
<hr class="sr">
## Reports/Website stats
<details>
<summary>You generate reports from your Apache logs using <b>awstats</b>. It was used by OVH, but they moved to their own-made tool in late 2020. I used it with the command line, like this</summary>
```bash
sudo apt-get install htmldoc
wget https://prdownloads.sourceforge.net/awstats/awstats-7.8.tar.gz
sudo mkdir /usr/local/awstats
sudo mv awstats-7.8/* /usr/local/awstats
# generate (once)
cd /usr/local/awstats/
./tools/awstats_configure.pl
# -----> Check for web server install
# > none
# -----> Need to create a new config file ?
# ... file (required if first install) [y/N] ? y
# -----> Define config file name to create
# > website_url_or_name
# -----> Define config file path
# > /etc/awstats
# result:
cat /etc/awstats/awstats.website_url_or_name.conf
# update
sudo perl wwwroot/cgi-bin/awstats.pl -config=website_url_or_name -update
# generate
sudo perl /usr/local/awstats/tools/awstats_buildstaticpages.pl -config=website_url_or_name -month=all -year=2020 -dir=/tmp/folder/ -buildpdf=/usr/bin/htmldoc
# PDF file is 'awstats.website_url_or_name.pdf'
ls -la /tmp/folder/awstats.website_url_or_name.pdf
```
</details>
You can also check [goaccess](https://goaccess.io/). | 37.820225 | 385 | 0.751931 | eng_Latn | 0.731282 |
380a1dc5aec6ad7e44b17cf1863d3a31d0ef557b | 1,782 | md | Markdown | _posts/2019-10-05-data_engineering_1.md | getChan/getChan.github.io | 8fc6eb03f9ffda357c9969c6878a7ffd2021fd32 | [
"MIT"
] | null | null | null | _posts/2019-10-05-data_engineering_1.md | getChan/getChan.github.io | 8fc6eb03f9ffda357c9969c6878a7ffd2021fd32 | [
"MIT"
] | 9 | 2019-11-28T06:07:05.000Z | 2022-03-29T12:54:57.000Z | _posts/2019-10-05-data_engineering_1.md | getChan/getChan.github.io | 8fc6eb03f9ffda357c9969c6878a7ffd2021fd32 | [
"MIT"
] | null | null | null | ---
title: "DE 스터디(1) - 리눅스 커맨드"
excerpt: "data engineering 스터디 정리입니다."
categories:
- data
tags:
- study
- linux
- system
last_modified_at: 2019-10-05T08:06:00-05:00
---
# 리눅스 커맨드
## 기본 파일 다루기
- 파일 생성 : `touch`, `cat`
- 파일 복사 : `cp <파일경로>/<복사할 파일명> <복사할 경로>/<저장할 파일명>`
- `cp a* ../B/` : a로 시작하는 파일명을 가진 파일들을 상위 폴더 내의 B 폴더로 복사해라
- `cp -r ./B ./D` : B폴더 전부를 D폴더로 복사(`-r` : recursive)
- pwd : 현재 폴더의 절대경로
- 이름변경 : `rename <공통부분> <변경양식> <참조파일의 형식>`
- `rename a_ b_ a_?.txt` : 'a_?.txt'와 같은 형식의 파일의 'a_'를 'b_'로 바꿔라
- 파일이동 : `mv <from> <to>`
- `mv ./d_?.txt ../C` : 현재 `d_?.txt` 형식의 파일을 C폴더로 옮긴다.
- `mv -r <from> <to>` : 폴더 이동
## yum
RPM 기반의 시스템을 위한 자동 업데이터 겸 패키지 설치/제거 도구이다
- `sudo yum instll wget`
> wget : 특정 url에 있는 파일을 가져온다.
## shell script
```shell
name="chan"
echo $name
echo "Hello world ${name}"
```
### 실습
1. `test` 폴더 만들기
2. `A` 폴더 만들기
3. `A` 폴더 안에 `a_1.txt` ~ `a_4.txt` 만들기
4. `A` 폴더를 `B`폴더로 복사
5. `B` 폴더 내 파일명 변경 : `a_` -> `b_`
```shell
mkdir -p ./test/A/ # 1,2
touch ./test/A/{a_1,a_2,a_3}.txt # 3
cp -r ./test/A ./test/B # 4
cd ./test/B/
rename a_ b_ a_?.txt # 5
```
```shell
folder_name='test_2'
mkdir -p $folder_name/A/
touch $folder_name/A/{a_1,a_2,a_3}.txt
cp -r $folder_name/A $folder_name/B
cd $folder_name/B/
rename a_ b_ a_?.txt
```
### 환경 변수
`echo $HOME` : 홈 디렉토리를 출력한다
`export` : 환경 변수들을 보여준다.
환경 변수 추가하기 : `export folder_name = 'test_1000'`
다시 쉘 스크립트 실행하면 환경 변수를 그대로 가져와서 `test_1000` 폴더를 생성한다.
### 인자 전달하기
```shell
folder_name=$1 # 첫 번째 positional argment
mkdir -p $folder_name/A/
touch $folder_name/A/{a_1,a_2,a_3}.txt
cp -r $folder_name/A $folder_name/B
cd $folder_name/B/
rename a_ b_ a_?.txt
```
`sh test.sh test_args`로 커맨드를 하면 `test_args`가 `folder_name`이 된다.
## 파이썬 파일과 연동하기
파이썬 파일을 쉘 스크립트에서 실행할 수 있다. --> 무한 응용 가능~~!! | 20.964706 | 66 | 0.608305 | kor_Hang | 0.999915 |
380b60312aaed515b6b2cbac70a73bf58cf69434 | 60 | md | Markdown | README.md | pongwudp/WoodyToken | 43dc21776c1a9961a7d2ffa544738aeaa45bc721 | [
"MIT"
] | null | null | null | README.md | pongwudp/WoodyToken | 43dc21776c1a9961a7d2ffa544738aeaa45bc721 | [
"MIT"
] | null | null | null | README.md | pongwudp/WoodyToken | 43dc21776c1a9961a7d2ffa544738aeaa45bc721 | [
"MIT"
] | null | null | null | # WoodyToken
As far as i can
https://youtu.be/jLiaJHIWNkg
| 10 | 28 | 0.733333 | kor_Hang | 0.462018 |
380bab666809f1aa6afadff194dd0f95b05d5217 | 1,041 | md | Markdown | packages/client/docs/modules/_logging_.md | 0xHashstack/ethereumjs-vm | 582b4efd86ed59ace758de9369be2292b7a5f61e | [
"MIT"
] | 834 | 2021-01-19T05:04:15.000Z | 2022-03-31T19:34:38.000Z | packages/client/docs/modules/_logging_.md | meaze0507/ethereumjs-monorepo | 873393983de08a2caf2659e09742f10065b04ac8 | [
"MIT"
] | 925 | 2016-01-05T22:30:04.000Z | 2021-01-18T10:16:12.000Z | packages/client/docs/modules/_logging_.md | meaze0507/ethereumjs-monorepo | 873393983de08a2caf2659e09742f10065b04ac8 | [
"MIT"
] | 333 | 2015-12-15T03:23:56.000Z | 2021-01-18T14:08:23.000Z | [ethereumjs-client](../README.md) › ["logging"](_logging_.md)
# Module: "logging"
## Index
### Type aliases
* [Logger](_logging_.md#logger)
### Variables
* [defaultLogger](_logging_.md#const-defaultlogger)
### Functions
* [getLogger](_logging_.md#getlogger)
## Type aliases
### Logger
Ƭ **Logger**: *WinstonLogger*
*Defined in [lib/logging.ts:4](https://github.com/ethereumjs/ethereumjs-client/blob/master/lib/logging.ts#L4)*
## Variables
### `Const` defaultLogger
• **defaultLogger**: *Logger‹›* = getLogger({ loglevel: 'info' })
*Defined in [lib/logging.ts:56](https://github.com/ethereumjs/ethereumjs-client/blob/master/lib/logging.ts#L56)*
## Functions
### getLogger
▸ **getLogger**(`options`: object): *Logger‹›*
*Defined in [lib/logging.ts:39](https://github.com/ethereumjs/ethereumjs-client/blob/master/lib/logging.ts#L39)*
**Parameters:**
▪`Default value` **options**: *object*= { loglevel: 'info' }
Name | Type | Default |
------ | ------ | ------ |
`loglevel` | string | "info" |
**Returns:** *Logger‹›*
| 20.019231 | 112 | 0.660903 | yue_Hant | 0.814373 |
380bd6cd99018b50f3560f522b94265319a82735 | 559 | md | Markdown | music/README.md | zsptsf/uniapp-tools | e04e87a58230321516d07cd1f42a6d5c1b741fff | [
"MIT"
] | 213 | 2019-06-03T02:55:41.000Z | 2021-12-15T06:24:33.000Z | music/README.md | minkaihui/uniapp-tools | 410bfd225abf721a55dfae38622b580f74d2aeee | [
"MIT"
] | 10 | 2019-07-29T10:28:37.000Z | 2020-01-13T09:54:46.000Z | music/README.md | minkaihui/uniapp-tools | 410bfd225abf721a55dfae38622b580f74d2aeee | [
"MIT"
] | 104 | 2019-06-21T10:21:01.000Z | 2022-01-28T06:49:03.000Z | # pocky-music
<img src="https://img.shields.io/badge/version-1.0.1-blue.svg?cacheSeconds=2592000" /><br />
## 介绍
<font color="red">**推荐先下载示例项目**</font><br />
特点:
- 根据简谱自定义实现播放歌曲
## 更新日志
[https://ext.dcloud.net.cn/plugin?id=704&update_log](https://ext.dcloud.net.cn/plugin?id=704&update_log)
## 文档和 demo
[https://www.yuque.com/pocky/aaeyux/gflqlz](https://www.yuque.com/pocky/aaeyux/gflqlz)
## dcloud 商店
[https://ext.dcloud.net.cn/plugin?id=704](https://ext.dcloud.net.cn/plugin?id=704)
## 最后
能不能给个五星好评支持一下 👍👍👍<br />
能不能给 **github** 点个**star**👍👍👍
| 19.275862 | 104 | 0.676208 | yue_Hant | 0.736045 |
380c8f091bbf1533ed82321b3161163c8eb82b8e | 283 | md | Markdown | pacmanBot/README.md | bayoumi17m/bots | 0a450df6b516019b28c2a66339628cb8cc63a4c0 | [
"MIT"
] | null | null | null | pacmanBot/README.md | bayoumi17m/bots | 0a450df6b516019b28c2a66339628cb8cc63a4c0 | [
"MIT"
] | null | null | null | pacmanBot/README.md | bayoumi17m/bots | 0a450df6b516019b28c2a66339628cb8cc63a4c0 | [
"MIT"
] | null | null | null | # Pacman Bot
This is our first set of bots that we will be playing around with to get practice working with an AI and connecting the AI to the field of play.
## Introduction
## Demos
### Demo 1: (List Agent Type)
Place GIF here
## Dependencies
## Running the Bots
## Credits
| 15.722222 | 144 | 0.717314 | eng_Latn | 0.998867 |
380c943d6411f1f65b09fa9eef8522a9c0a723e7 | 359 | md | Markdown | readme.md | saurabhsharma/Friday | 34cf166551c0546b99355fb9aaba79ca10dab515 | [
"MIT"
] | null | null | null | readme.md | saurabhsharma/Friday | 34cf166551c0546b99355fb9aaba79ca10dab515 | [
"MIT"
] | null | null | null | readme.md | saurabhsharma/Friday | 34cf166551c0546b99355fb9aaba79ca10dab515 | [
"MIT"
] | null | null | null |
## About Friday
Friday is a web application to distribute iOS/Android application Over The Air.
Work is in progress, stay tune!
## Using Friday
Wiki URL will go here
## Contributing
Thank you for considering contributing to the Friday!
## License
Friday is open-sourced software licensed under the [MIT license](http://opensource.org/licenses/MIT).
| 18.894737 | 101 | 0.763231 | eng_Latn | 0.992044 |
380dc127cef697aeca8f08539123e799fbb3df97 | 1,921 | md | Markdown | source/_posts/coronavirus_uk_announces_deaths_for_thursday.md | soumyadipdas37/finescoop.github.io | 0346d6175a2c36d4054083c144b7f8364db73f2f | [
"MIT"
] | null | null | null | source/_posts/coronavirus_uk_announces_deaths_for_thursday.md | soumyadipdas37/finescoop.github.io | 0346d6175a2c36d4054083c144b7f8364db73f2f | [
"MIT"
] | null | null | null | source/_posts/coronavirus_uk_announces_deaths_for_thursday.md | soumyadipdas37/finescoop.github.io | 0346d6175a2c36d4054083c144b7f8364db73f2f | [
"MIT"
] | 2 | 2021-09-18T12:06:26.000Z | 2021-11-14T15:17:34.000Z | ---
extends: _layouts.post
section: content
image: https://i.dailymail.co.uk/1s/2020/10/01/14/33858528-0-image-a-40_1601558067994.jpg
title: Coronavirus UK announces deaths for Thursday
description: NHS England confirmed a further 44 people had died in its hospitals, along with six in Wales, three in Scotland and two in Northern Ireland. A full update will be published this afternoon.
date: 2020-10-01-14-38-16
categories: [latest, news]
featured: true
---
The UK has announced another 55 deaths from Covid-19 today in its early count.
Another 44 people were confirmed to have died in NHS England hospitals, along with six in Wales, three in Scotland and two in Northern Ireland.
A full update will be published by the Department of Health later today.
The early count comes as the average number of daily deaths is rising in the UK after swooping to a low of just seven per day a month ago, with there now a daily average of 40.
But in a shred of hopeful news, data now suggests that the surging numbers of cases which have rattled the nation in recent weeks appear to be slowing down.
Estimates from King's College London's Covid Symptom Study suggest that the rise in daily new cases is only 23 per cent higher than last week after it more than doubled in the week before.
And the Government-funded REACT-1 study, carried out by Imperial College London, said there were signs that the R rate has fallen to around 1.1 now, from 1.7 in September, and that cases are now rising less steeply than they were a few weeks ago.
NHS England said the patients who had died in its hospitals were aged between 60 and 99 and succumbed to the virus between September 18 and September 30.
Hospitals in the North West accounted for the single largest number – 15 – along with eight in the North East, six apiece in London and the Midlands, five in the South East, four in the East and none in the South West.
| 58.212121 | 246 | 0.782405 | eng_Latn | 0.999925 |
380e2b915122eb0e9044c06a1c507d806caade5c | 200 | md | Markdown | _wikis/1_argot/Characters/Nestor Domorosi.md | francofaa/digital-garden-jekyll-template | 9ae29425d5d3455026cef1954b113c9981869eb4 | [
"MIT"
] | null | null | null | _wikis/1_argot/Characters/Nestor Domorosi.md | francofaa/digital-garden-jekyll-template | 9ae29425d5d3455026cef1954b113c9981869eb4 | [
"MIT"
] | null | null | null | _wikis/1_argot/Characters/Nestor Domorosi.md | francofaa/digital-garden-jekyll-template | 9ae29425d5d3455026cef1954b113c9981869eb4 | [
"MIT"
] | null | null | null | ---
title: Nestor Domorosi
---
The city attorney of [[Argot]]. Prosecuted [[The Argotnauts]]. Rival of [[Armand Colodrise]], [[Hespero]], and [[Leandro Damaskenos]] for their flouting of Argoti laws. | 50 | 169 | 0.705 | eng_Latn | 0.957365 |
380e75cd804f751d3056ab34c18e828d8a816dbe | 1,686 | md | Markdown | labs/FacebookDemo/README.md | yangligeryang/codepath | b5aaaaa67615fb0f98ec25d3929c36d1754aca98 | [
"Apache-2.0"
] | null | null | null | labs/FacebookDemo/README.md | yangligeryang/codepath | b5aaaaa67615fb0f98ec25d3929c36d1754aca98 | [
"Apache-2.0"
] | null | null | null | labs/FacebookDemo/README.md | yangligeryang/codepath | b5aaaaa67615fb0f98ec25d3929c36d1754aca98 | [
"Apache-2.0"
] | null | null | null | ## Facebook
The purpose of this homework is to leverage animations and gestures to transition between screens. We're going to use the techniques from this week to implement some interactions in Facebook.
Time spent: `10`
### Features
#### Required
- [x] Tapping on a photo in the news feed should expand the photo full screen.
- [x] Tapping the Done button should animate the photo back into its position in the news feed.
- [x] On scroll of the full screen photo, the background should start to become transparent, revealing the feed.
- [x] If the user scrolls a large amount and releases, the full screen photo should dismiss.
#### Optional
- [x] The full screen photo should be zoomable.
- [ ] The user should be able to page through the other photos in full screen mode.
#### The following **additional** features are implemented:
- [ ] List anything else that you can get done to improve the app functionality!
Please list two areas of the assignment you'd like to **discuss further with your peers** during the next class (examples include better ways to implement something, how to extend your app in certain ways, etc):
1. Shifting frames between views was pretty difficult. Gave up in the end.
### Video Walkthrough
Here's a walkthrough of implemented user stories:
![screencap](https://github.com/yangligeryang/codepath/raw/master/labs/FacebookDemo/screencap.gif?raw=true)
GIF created with [LiceCap](http://www.cockos.com/licecap/).
## Notes
Shifting frames between views was pretty difficult. Gave up in the end on the small animation bugs. Zooming is also a little buggy but don't have time to fix it. Might return to it later, as well as the photo paging.
| 42.15 | 216 | 0.762752 | eng_Latn | 0.998834 |
380ec35c717d45fd1beb96ec3aaa71c7d0e3d265 | 452 | md | Markdown | github/add/collaborator/README.md | antoniofilhozup/ritchie-formulas | 3c0317b8c2bc1abe0d1933fdc9b560e8786d4c81 | [
"Apache-2.0"
] | 107 | 2020-04-16T20:12:00.000Z | 2022-03-04T23:40:08.000Z | github/add/collaborator/README.md | antoniofilhozup/ritchie-formulas | 3c0317b8c2bc1abe0d1933fdc9b560e8786d4c81 | [
"Apache-2.0"
] | 147 | 2020-04-14T13:20:49.000Z | 2022-01-17T14:35:23.000Z | github/add/collaborator/README.md | antoniofilhozup/ritchie-formulas | 3c0317b8c2bc1abe0d1933fdc9b560e8786d4c81 | [
"Apache-2.0"
] | 97 | 2020-04-14T13:35:11.000Z | 2022-01-05T19:01:13.000Z | # Description
This formula allows adding a new collaborator by typing only two parameters.
(collaborator username and repository name)
## Command
```bash
rit github add collaborator
```
## Requirements
- [Git Installed](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
- Github Account
## Demonstration
![Example](https://github.com/ZupIT/ritchie-formulas/raw/master/github/add/collaborator/src/docs/github.gif)
| 22.6 | 109 | 0.736726 | eng_Latn | 0.75053 |
380f2ec522b7b5c66392d05cf9fc8a3d95faaca2 | 244 | md | Markdown | README.md | Abejrijwi/aim-it | 970b030618067944b2e8bfc775236b8faa7fd44b | [
"MIT"
] | null | null | null | README.md | Abejrijwi/aim-it | 970b030618067944b2e8bfc775236b8faa7fd44b | [
"MIT"
] | null | null | null | README.md | Abejrijwi/aim-it | 970b030618067944b2e8bfc775236b8faa7fd44b | [
"MIT"
] | null | null | null | # aim-it
Asian Institute of Management & Information Technology (AIMIT) is one of the best Knowledge Resources Centre for professional & vocational courses different mode like part time, full time and distance to the students of Bihar.
| 81.333333 | 234 | 0.807377 | eng_Latn | 0.994475 |
380fe0249b5401843bfcc35781cd2c569e954927 | 57 | md | Markdown | README.md | manoffna/java-net-course | b83970d604309171cef63c292218158274d3b3d6 | [
"MIT"
] | null | null | null | README.md | manoffna/java-net-course | b83970d604309171cef63c292218158274d3b3d6 | [
"MIT"
] | null | null | null | README.md | manoffna/java-net-course | b83970d604309171cef63c292218158274d3b3d6 | [
"MIT"
] | null | null | null | # java-net-course
# Hello! It's my forst pull-request!!!
| 19 | 38 | 0.684211 | eng_Latn | 0.450566 |
381091664152ed1f0dc51d4e7aab8e02ce2a0f13 | 1,148 | md | Markdown | README.md | jose-mariano/cadastro-de-pessoas | 837a6dfddbc4ed848902a7428bd2fa4ce37e1d25 | [
"MIT"
] | null | null | null | README.md | jose-mariano/cadastro-de-pessoas | 837a6dfddbc4ed848902a7428bd2fa4ce37e1d25 | [
"MIT"
] | null | null | null | README.md | jose-mariano/cadastro-de-pessoas | 837a6dfddbc4ed848902a7428bd2fa4ce37e1d25 | [
"MIT"
] | null | null | null | # Cadastro de Pessoas com Python
Como o próprio nome já diz esse é um sistema para cadastrar pessoas, feito com python3.
Esse sistema é bem simples, a ideia é um sistema para cadastrar e visualizar as pessoas cadastradas. Nesse projeto os dados ficam salvos em um banco sqlite. O sistema ainda permite que o usúario escolha qual tipo de interface deseja utilizar, por padrão o sistema funciona com interface gráfica, mas basta adicionar CLI como argumento no momento de executar o programa e ele automaticamente irá inicializar o command line.
### Requisitos
Nesse projeto não utilizo nenhuma biblioteca além das padrões do python. Apenas certifique-se que possuí a biblioteca tkinter, para conseguir executar no modo GUI. No linux a biblioteca tkinter não vem por padrão como no windows e no mac. Para instalar essa biblioteca utilize o seguinte comando:
`sudo apt-get install python3-tk`
### Executando
Como dito acima, podemos utilizar duas interfaces diferentes, o modo CLI e o modo GUI (padrão). Abaixo segue como executar de cada maneira:
Usando modo CLI:
`python3 run.py CLI`
Usando modo GUI:
`python3 run.py GUI` ou `python3 run.py`
| 57.4 | 422 | 0.797038 | por_Latn | 0.999623 |
38109998a222846ff96f08dcdca9969bb9e7db4b | 21 | md | Markdown | README.md | doggie007/doggie007.github.io | 1a8d4e6901c19921ec6d31b98cc60d02bc5a74c4 | [
"CC-BY-3.0"
] | null | null | null | README.md | doggie007/doggie007.github.io | 1a8d4e6901c19921ec6d31b98cc60d02bc5a74c4 | [
"CC-BY-3.0"
] | null | null | null | README.md | doggie007/doggie007.github.io | 1a8d4e6901c19921ec6d31b98cc60d02bc5a74c4 | [
"CC-BY-3.0"
] | null | null | null | # doggie007.github.io | 21 | 21 | 0.809524 | swe_Latn | 0.227201 |
f700cf3fd0b0be7d4db11cb9dc195893262f9f17 | 2,597 | md | Markdown | docs/framework/winforms/controls/how-to-define-z-ordering-of-docked-toolstrip-controls.md | emrekas/docs.tr-tr | 027bd2c6c93900a75cac7ac42531c89085f87888 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-01-06T07:30:24.000Z | 2020-01-06T07:30:24.000Z | docs/framework/winforms/controls/how-to-define-z-ordering-of-docked-toolstrip-controls.md | emrekas/docs.tr-tr | 027bd2c6c93900a75cac7ac42531c89085f87888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/winforms/controls/how-to-define-z-ordering-of-docked-toolstrip-controls.md | emrekas/docs.tr-tr | 027bd2c6c93900a75cac7ac42531c89085f87888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Nasıl yapılır: Yerleştirilmiş ToolStrip Denetimlerinin Z Sıralamasını Tanımlama'
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- ToolStrip control [Windows Forms]
- MenuStrip control [Windows Forms]
- toolbars [Windows Forms], specifying z-order
- z-order
ms.assetid: 8b595429-ba9f-46af-9c55-3d5cc53f7fff
ms.openlocfilehash: 514c9dd1c91adcadf6f5d383ba734886dec3151d
ms.sourcegitcommit: c7a7e1468bf0fa7f7065de951d60dfc8d5ba89f5
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 05/14/2019
ms.locfileid: "65591911"
---
# <a name="how-to-define-z-ordering-of-docked-toolstrip-controls"></a>Nasıl yapılır: Yerleştirilmiş ToolStrip Denetimlerinin Z Sıralamasını Tanımlama
Konumuna bir <xref:System.Windows.Forms.ToolStrip> denetimi doğru şekilde yerleştirme ile denetim doğru formun z düzeninde hizalamanız gerekir.
## <a name="example"></a>Örnek
Aşağıdaki kod örneğinde nasıl düzenleneceğini gösterir bir <xref:System.Windows.Forms.ToolStrip> denetimi ve bir yerleşik <xref:System.Windows.Forms.MenuStrip> z düzenini belirterek denetimi.
[!code-csharp[System.Windows.Forms.ToolStrip.Misc#21](~/samples/snippets/csharp/VS_Snippets_Winforms/System.Windows.Forms.ToolStrip.Misc/CS/Program.cs#21)]
[!code-vb[System.Windows.Forms.ToolStrip.Misc#21](~/samples/snippets/visualbasic/VS_Snippets_Winforms/System.Windows.Forms.ToolStrip.Misc/VB/Program.vb#21)]
Z düzenini sıralarına göre belirlenir <xref:System.Windows.Forms.ToolStrip> ve <xref:System.Windows.Forms.MenuStrip>
denetimler, formun eklenir <xref:System.Windows.Forms.Control.Controls%2A> koleksiyonu.
[!code-csharp[System.Windows.Forms.ToolStrip.Misc#23](~/samples/snippets/csharp/VS_Snippets_Winforms/System.Windows.Forms.ToolStrip.Misc/CS/Program.cs#23)]
[!code-vb[System.Windows.Forms.ToolStrip.Misc#23](~/samples/snippets/visualbasic/VS_Snippets_Winforms/System.Windows.Forms.ToolStrip.Misc/VB/Program.vb#23)]
Bu çağrılar için sırasını ters <xref:System.Windows.Forms.Control.ControlCollection.Add%2A> yöntemi ve görünüm düzenini üzerindeki etkisi.
## <a name="compiling-the-code"></a>Kod Derleniyor
Bu örnek gerektirir:
- System.Windows.Forms System.Design ve System.Drawing derlemelerine başvurular.
## <a name="see-also"></a>Ayrıca bkz.
- <xref:System.Windows.Forms.MenuStrip>
- <xref:System.Windows.Forms.ToolStrip>
- <xref:System.Windows.Forms.Control.ControlCollection.Add%2A>
- <xref:System.Windows.Forms.Control.Controls%2A>
- <xref:System.Windows.Forms.Control.Dock%2A>
- [ToolStrip Denetimi](toolstrip-control-windows-forms.md)
| 50.921569 | 194 | 0.792453 | tur_Latn | 0.662542 |
f7010afc8b48548f8908ec750a3f79acd53747bb | 1,386 | md | Markdown | TSLTabooKopt/justificacion.md | petrusboniatus/TSL | 871210201ef4e9dc9ecd8834adc430d07dce7422 | [
"Apache-2.0"
] | null | null | null | TSLTabooKopt/justificacion.md | petrusboniatus/TSL | 871210201ef4e9dc9ecd8834adc430d07dce7422 | [
"Apache-2.0"
] | null | null | null | TSLTabooKopt/justificacion.md | petrusboniatus/TSL | 871210201ef4e9dc9ecd8834adc430d07dce7422 | [
"Apache-2.0"
] | null | null | null | # Justificación:
1. Inicialización greedy: va cogiendo el camino más corto, sirve para coger
una mejor solución inicial.
2. Cambio de operador: el operador definido por la tupla (i,j) significa, además
de la inversión de los nodos i e j, la reversión del resto de los nodos entre
i e j. Esto genera unos venimos más "coherentes" y permite un mayor cambio.
3. Se crean una estrategia de reinicialización por diversificación. En la que
se generan vecinos mucho más lejanos (cambiando aleatoriamente la mitad del
camino) y se coger el mejor de los generados aleatoriamente ponderando:
- La distancia del vecino
- Lo repetidos que están los nodos en la matriz de frecuencias.
La formula final de distancia es siguiente:
```
let freq_cost = self.freq_mat.get_solution_freq_cost(&new_vec);
let final_cost = self.calculate_cost(&new_vec) + freq_cost * delta_cost
* REPETITION_CONST;
```
Donde la función get_solution_freq_cost calcula el costo de todos los nodos de
la matriz de frecuencias de la siguiente forma: Suma los costes individuales
de cada nodo de la solución, que a su vez se calcula como el cociente entre
la frecuencia del nodo y la frecuencia máxima.
Esta estrategia permite que se visiten nodos que nunca se han visitado y a la
vez encontrar soluciones lo suficientemente buenas como para tener potencial
de mejorar la actual antes del siguiente reinicio
| 51.333333 | 80 | 0.790765 | spa_Latn | 0.997183 |
f70232beed809feb6d16dea89629518cc37e41af | 274 | md | Markdown | CONTRIB.md | allanclempe/solidity-contracts | 7fa9f3c825bea2096facbcc5a8f42372a99b90e0 | [
"MIT"
] | 5 | 2021-12-22T13:41:25.000Z | 2022-02-14T04:50:32.000Z | CONTRIB.md | allanclempe/solidity-contracts | 7fa9f3c825bea2096facbcc5a8f42372a99b90e0 | [
"MIT"
] | null | null | null | CONTRIB.md | allanclempe/solidity-contracts | 7fa9f3c825bea2096facbcc5a8f42372a99b90e0 | [
"MIT"
] | 1 | 2022-01-22T03:12:34.000Z | 2022-01-22T03:12:34.000Z | ### Dependencies
```console
$ npm i -g solc ganache-cli truffle
$ npm i
```
### Run ganache emulator
```console
$ npm run ganache
```
### Run contract migration
```console
$ npm run migrate
```
### Run unit tests
```console
$ npm run test
```
| 10.96 | 36 | 0.569343 | eng_Latn | 0.229165 |
f70242cb55ac2648912ba5f2331da230d3f68ccf | 27,469 | md | Markdown | fabric/24917-25463/25305.md | hyperledger-gerrit-archive/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 2 | 2021-11-08T08:06:48.000Z | 2021-12-03T01:51:44.000Z | fabric/24917-25463/25305.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | null | null | null | fabric/24917-25463/25305.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 4 | 2019-12-07T05:54:26.000Z | 2020-06-04T02:29:43.000Z | <strong>Project</strong>: fabric<br><strong>Branch</strong>: master<br><strong>ID</strong>: 25305<br><strong>Subject</strong>: FAB-11520 Add implementation for ChaincodeInstall<br><strong>Status</strong>: MERGED<br><strong>Owner</strong>: Jason Yellick - [email protected]<br><strong>Assignee</strong>:<br><strong>Created</strong>: 8/8/2018, 3:39:10 PM<br><strong>LastUpdated</strong>: 8/23/2018, 2:01:07 AM<br><strong>CommitMessage</strong>:<br><pre>FAB-11520 Add implementation for ChaincodeInstall
This CR provides a basic implementation of a new ChaincodeInstall to be
utilized by the new lifecycle SCC in a future CR.
Makes #Done FAB-11520
Change-Id: Ie8efa507fd94c075bf1f1ae0f0b214f5936538a6
Signed-off-by: Jason Yellick <[email protected]>
</pre><h1>Comments</h1><strong>Reviewer</strong>: Jason Yellick - [email protected]<br><strong>Reviewed</strong>: 8/8/2018, 3:39:10 PM<br><strong>Message</strong>: <pre>Uploaded patch set 1.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/8/2018, 3:41:28 PM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/3700/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/8/2018, 3:41:49 PM<br><strong>Message</strong>: <pre>Patch Set 1:
Starting verify build</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/8/2018, 3:41:55 PM<br><strong>Message</strong>: <pre>Patch Set 1: F1-VerifyBuild-1
code checks are failed</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/8/2018, 3:42:22 PM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Failed
https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/3700/ : FAILURE (skipped)
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/3700/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-build-checks-x86_64/3700</pre><strong>Reviewer</strong>: Jason Yellick - [email protected]<br><strong>Reviewed</strong>: 8/9/2018, 11:47:02 PM<br><strong>Message</strong>: <pre>Uploaded patch set 2.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/9/2018, 11:50:16 PM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/3769/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/9/2018, 11:50:35 PM<br><strong>Message</strong>: <pre>Patch Set 2:
Starting verify build</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/9/2018, 11:58:11 PM<br><strong>Message</strong>: <pre>Patch Set 2: F2-DocBuild+1 F1-VerifyBuild+1
Succeeded, Run SmokeTest</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/9/2018, 11:58:36 PM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Successful
https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/3769/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-build-checks-x86_64/3769</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/10/2018, 12:01:05 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Started https://jenkins.hyperledger.org/job/fabric-smoke-tests-x86_64/2452/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/10/2018, 12:01:26 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Starting smoke tests</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/10/2018, 12:15:41 AM<br><strong>Message</strong>: <pre>Patch Set 2: F2-SmokeTest+1
Succeeded, Run UnitTest, Run IntegrationTest</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/10/2018, 12:18:36 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-unit-tests-x86_64/3552/ (2/3)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/10/2018, 12:18:57 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Starting unit tests</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/10/2018, 12:18:59 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-integration-tests-x86_64/1162/ (3/3)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/10/2018, 12:19:36 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Starting Integration tests</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/10/2018, 12:34:00 AM<br><strong>Message</strong>: <pre>Patch Set 2: F3-IntegrationTest+1
Succeeded</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/10/2018, 12:34:48 AM<br><strong>Message</strong>: <pre>Patch Set 2: F3-UnitTest+1
Succeeded</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/10/2018, 12:35:17 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Successful
https://jenkins.hyperledger.org/job/fabric-smoke-tests-x86_64/2452/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-smoke-tests-x86_64/2452
https://jenkins.hyperledger.org/job/fabric-verify-unit-tests-x86_64/3552/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-unit-tests-x86_64/3552
https://jenkins.hyperledger.org/job/fabric-verify-integration-tests-x86_64/1162/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-integration-tests-x86_64/1162</pre><strong>Reviewer</strong>: Jason Yellick - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 12:29:30 PM<br><strong>Message</strong>: <pre>Uploaded patch set 3.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 12:31:31 PM<br><strong>Message</strong>: <pre>Patch Set 3:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/3925/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 12:31:50 PM<br><strong>Message</strong>: <pre>Patch Set 3:
Starting verify build</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 12:33:42 PM<br><strong>Message</strong>: <pre>Patch Set 3: F1-VerifyBuild-1
code checks are failed</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 12:34:05 PM<br><strong>Message</strong>: <pre>Patch Set 3:
Build Failed
https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/3925/ : FAILURE (skipped)
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/3925/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-build-checks-x86_64/3925</pre><strong>Reviewer</strong>: Jason Yellick - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:11:15 PM<br><strong>Message</strong>: <pre>Patch Set 4: Patch Set 3 was rebased</pre><strong>Reviewer</strong>: Jason Yellick - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:12:21 PM<br><strong>Message</strong>: <pre>Patch Set 4:
VerifyBuild</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:16:55 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/3934/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:17:13 PM<br><strong>Message</strong>: <pre>Patch Set 4: -F1-VerifyBuild
Starting verify build</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:23:15 PM<br><strong>Message</strong>: <pre>Patch Set 4: F2-DocBuild+1 F1-VerifyBuild+1
Succeeded, Run SmokeTest</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:23:21 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Build Started https://jenkins.hyperledger.org/job/fabric-smoke-tests-x86_64/2562/ (2/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:23:48 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Starting smoke tests</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:36:48 PM<br><strong>Message</strong>: <pre>Patch Set 4: F2-SmokeTest+1
Succeeded, Run UnitTest, Run IntegrationTest</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:42:16 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-unit-tests-x86_64/3681/ (3/4)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:42:40 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Starting unit tests</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:45:16 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-integration-tests-x86_64/1272/ (4/4)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:46:12 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Starting Integration tests</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 1:58:51 PM<br><strong>Message</strong>: <pre>Patch Set 4: F3-UnitTest+1
Succeeded</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 2:06:29 PM<br><strong>Message</strong>: <pre>Patch Set 4: F3-IntegrationTest-1
Failed</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 2:07:00 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Build Failed
https://jenkins.hyperledger.org/job/fabric-verify-integration-tests-x86_64/1272/ : FAILURE (skipped)
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-verify-integration-tests-x86_64/1272/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-integration-tests-x86_64/1272
https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/3934/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-build-checks-x86_64/3934
https://jenkins.hyperledger.org/job/fabric-smoke-tests-x86_64/2562/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-smoke-tests-x86_64/2562
https://jenkins.hyperledger.org/job/fabric-verify-unit-tests-x86_64/3681/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-unit-tests-x86_64/3681</pre><strong>Reviewer</strong>: Jason Yellick - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 4:37:48 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Run IntegrationTest</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 4:39:31 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-integration-tests-x86_64/1275/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 4:40:09 PM<br><strong>Message</strong>: <pre>Patch Set 4: -F3-IntegrationTest
Starting Integration tests</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 4:55:44 PM<br><strong>Message</strong>: <pre>Patch Set 4: F3-IntegrationTest+1
Succeeded</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/14/2018, 4:56:07 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Build Successful
https://jenkins.hyperledger.org/job/fabric-verify-integration-tests-x86_64/1275/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-integration-tests-x86_64/1275</pre><strong>Reviewer</strong>: Gari Singh - [email protected]<br><strong>Reviewed</strong>: 8/15/2018, 6:15:35 AM<br><strong>Message</strong>: <pre>Patch Set 4:
Even though this will end up as "SCC", my assumption is that we don't actually want this replaced and that we don't want this code called outside from outside this package? Is so, perhaps we should consider moving this into an "internal" package?</pre><strong>Reviewer</strong>: Gari Singh - [email protected]<br><strong>Reviewed</strong>: 8/15/2018, 6:16:33 AM<br><strong>Message</strong>: <pre>Patch Set 4: Code-Review+1</pre><strong>Reviewed</strong>: 8/15/2018, 9:26:09 AM<br><strong>Message</strong>: <pre>Patch Set 4: Code-Review+1</pre><strong>Reviewer</strong>: Jason Yellick - [email protected]<br><strong>Reviewed</strong>: 8/16/2018, 12:06:43 PM<br><strong>Message</strong>: <pre>Uploaded patch set 5: Patch Set 4 was rebased.</pre><strong>Reviewer</strong>: Gari Singh - [email protected]<br><strong>Reviewed</strong>: 8/17/2018, 4:28:25 AM<br><strong>Message</strong>: <pre>Patch Set 5: Code-Review+2</pre><strong>Reviewer</strong>: Will Lahti - [email protected]<br><strong>Reviewed</strong>: 8/20/2018, 11:13:10 AM<br><strong>Message</strong>: <pre>Patch Set 5: Code-Review+1
(1 comment)
One nit which we can always address later. Looks good otherwise.</pre><strong>Reviewer</strong>: Jason Yellick - [email protected]<br><strong>Reviewed</strong>: 8/22/2018, 3:28:55 PM<br><strong>Message</strong>: <pre>Patch Set 6: Patch Set 5 was rebased</pre><strong>Reviewer</strong>: Matthew Sykes - [email protected]<br><strong>Reviewed</strong>: 8/22/2018, 3:58:07 PM<br><strong>Message</strong>: <pre>Patch Set 6: Code-Review+1</pre><strong>Reviewer</strong>: Srinivasan Muralidharan - [email protected]<br><strong>Reviewed</strong>: 8/22/2018, 6:58:26 PM<br><strong>Message</strong>: <pre>Patch Set 6: Code-Review+2</pre><strong>Reviewer</strong>: Jonathan Levi (HACERA) - [email protected]<br><strong>Reviewed</strong>: 8/23/2018, 1:18:45 AM<br><strong>Message</strong>: <pre>Patch Set 6: Code-Review+2</pre><strong>Reviewer</strong>: Jonathan Levi (HACERA) - [email protected]<br><strong>Reviewed</strong>: 8/23/2018, 1:19:01 AM<br><strong>Message</strong>: <pre>Change has been successfully merged by Jonathan Levi (HACERA)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/23/2018, 1:22:02 AM<br><strong>Message</strong>: <pre>Patch Set 6:
Build Started https://jenkins.hyperledger.org/job/fabric-merge-x86_64/4353/ (1/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/23/2018, 1:22:05 AM<br><strong>Message</strong>: <pre>Patch Set 6:
Build Started https://jenkins.hyperledger.org/job/fabric-merge-end-2-end-x86_64/3023/ (2/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 8/23/2018, 2:01:07 AM<br><strong>Message</strong>: <pre>Patch Set 6:
Build Successful
https://jenkins.hyperledger.org/job/fabric-merge-x86_64/4353/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-merge-x86_64/4353
https://jenkins.hyperledger.org/job/fabric-merge-end-2-end-x86_64/3023/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-merge-end-2-end-x86_64/3023</pre><h1>PatchSets</h1><h3>PatchSet Number: 1</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Jason Yellick - [email protected]<br><strong>Uploader</strong>: Jason Yellick - [email protected]<br><strong>Created</strong>: 8/8/2018, 3:39:10 PM<br><strong>UnmergedRevision</strong>: [19dbdf43b52825da53d73c9b9cb563196fb31cfa](https://github.com/hyperledger-gerrit-archive/fabric/commit/19dbdf43b52825da53d73c9b9cb563196fb31cfa)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/8/2018, 3:41:55 PM<br><strong>Type</strong>: F1-VerifyBuild<br><strong>Value</strong>: -1<br><br></blockquote><h3>PatchSet Number: 2</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Jason Yellick - [email protected]<br><strong>Uploader</strong>: Jason Yellick - [email protected]<br><strong>Created</strong>: 8/9/2018, 11:47:02 PM<br><strong>UnmergedRevision</strong>: [11f50a57b09e2b3731d9972033078ab392e5483d](https://github.com/hyperledger-gerrit-archive/fabric/commit/11f50a57b09e2b3731d9972033078ab392e5483d)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/9/2018, 11:58:11 PM<br><strong>Type</strong>: F1-VerifyBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/9/2018, 11:58:11 PM<br><strong>Type</strong>: F2-DocBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/10/2018, 12:15:41 AM<br><strong>Type</strong>: F2-SmokeTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/10/2018, 12:34:00 AM<br><strong>Type</strong>: F3-IntegrationTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/10/2018, 12:34:48 AM<br><strong>Type</strong>: F3-UnitTest<br><strong>Value</strong>: 1<br><br></blockquote><h3>PatchSet Number: 3</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Jason Yellick - [email protected]<br><strong>Uploader</strong>: Jason Yellick - [email protected]<br><strong>Created</strong>: 8/14/2018, 12:29:30 PM<br><strong>UnmergedRevision</strong>: [87a6d1ba243a443e863422ca09f2dd49b3660199](https://github.com/hyperledger-gerrit-archive/fabric/commit/87a6d1ba243a443e863422ca09f2dd49b3660199)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 12:33:42 PM<br><strong>Type</strong>: F1-VerifyBuild<br><strong>Value</strong>: -1<br><br></blockquote><h3>PatchSet Number: 4</h3><blockquote><strong>Type</strong>: TRIVIAL_REBASE<br><strong>Author</strong>: Jason Yellick - [email protected]<br><strong>Uploader</strong>: Jason Yellick - [email protected]<br><strong>Created</strong>: 8/14/2018, 1:11:15 PM<br><strong>UnmergedRevision</strong>: [275630c151e186a1d8c87fa3f7e92f17a7174740](https://github.com/hyperledger-gerrit-archive/fabric/commit/275630c151e186a1d8c87fa3f7e92f17a7174740)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:23:15 PM<br><strong>Type</strong>: F1-VerifyBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:23:15 PM<br><strong>Type</strong>: F2-DocBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:36:48 PM<br><strong>Type</strong>: F2-SmokeTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 4:55:44 PM<br><strong>Type</strong>: F3-IntegrationTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:58:51 PM<br><strong>Type</strong>: F3-UnitTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Gari Singh - [email protected]<br><strong>Approved</strong>: 8/15/2018, 6:16:33 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>:<br><strong>Approved</strong>: 8/15/2018, 9:26:09 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br></blockquote><h3>PatchSet Number: 5</h3><blockquote><strong>Type</strong>: TRIVIAL_REBASE<br><strong>Author</strong>: Jason Yellick - [email protected]<br><strong>Uploader</strong>: Jason Yellick - [email protected]<br><strong>Created</strong>: 8/16/2018, 12:06:43 PM<br><strong>UnmergedRevision</strong>: [d055edab6440157a57bacaa6cecc8307ae0eb363](https://github.com/hyperledger-gerrit-archive/fabric/commit/d055edab6440157a57bacaa6cecc8307ae0eb363)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:23:15 PM<br><strong>Type</strong>: F1-VerifyBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:23:15 PM<br><strong>Type</strong>: F2-DocBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:36:48 PM<br><strong>Type</strong>: F2-SmokeTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 4:55:44 PM<br><strong>Type</strong>: F3-IntegrationTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:58:51 PM<br><strong>Type</strong>: F3-UnitTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Gari Singh - [email protected]<br><strong>Approved</strong>: 8/17/2018, 4:28:25 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Will Lahti - [email protected]<br><strong>Approved</strong>: 8/20/2018, 11:13:10 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>:<br><strong>Approved</strong>: 8/15/2018, 9:26:09 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><h2>Comments</h2><strong>Commenter</strong>: Will Lahti - [email protected]<br><strong>CommentLine</strong>: [core/chaincode/lifecycle/lifecycle.go#L20](https://github.com/hyperledger-gerrit-archive/fabric/blob/d055edab6440157a57bacaa6cecc8307ae0eb363/core/chaincode/lifecycle/lifecycle.go#L20)<br><strong>Comment</strong>: <pre>Comment missing.</pre></blockquote><h3>PatchSet Number: 6</h3><blockquote><strong>Type</strong>: TRIVIAL_REBASE<br><strong>Author</strong>: Jason Yellick - [email protected]<br><strong>Uploader</strong>: Jason Yellick - [email protected]<br><strong>Created</strong>: 8/22/2018, 3:28:55 PM<br><strong>GitHubMergedRevision</strong>: [d360f225c86810769716d88cac9b32a7d11070aa](https://github.com/hyperledger-gerrit-archive/fabric/commit/d360f225c86810769716d88cac9b32a7d11070aa)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:23:15 PM<br><strong>Type</strong>: F1-VerifyBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:23:15 PM<br><strong>Type</strong>: F2-DocBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:36:48 PM<br><strong>Type</strong>: F2-SmokeTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 4:55:44 PM<br><strong>Type</strong>: F3-IntegrationTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 8/14/2018, 1:58:51 PM<br><strong>Type</strong>: F3-UnitTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Srinivasan Muralidharan - [email protected]<br><strong>Approved</strong>: 8/22/2018, 6:58:26 PM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Jonathan Levi (HACERA) - [email protected]<br><strong>Approved</strong>: 8/23/2018, 1:18:45 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>MergedBy</strong>: Jonathan Levi (HACERA)<br><strong>Merged</strong>: 8/23/2018, 1:19:01 AM<br><br><strong>Approver</strong>: Gari Singh - [email protected]<br><strong>Approved</strong>: 8/17/2018, 4:28:25 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Will Lahti - [email protected]<br><strong>Approved</strong>: 8/20/2018, 11:13:10 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>:<br><strong>Approved</strong>: 8/15/2018, 9:26:09 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Matthew Sykes - [email protected]<br><strong>Approved</strong>: 8/22/2018, 3:58:07 PM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br></blockquote> | 169.561728 | 10,073 | 0.761731 | kor_Hang | 0.407131 |
f702d6622a9d39115d6bcd3d20ce03b190410160 | 37 | md | Markdown | README.md | synXero/Config-Public | 0d0f51e9ce0999148990c61377e327c8f2edbbb3 | [
"MIT"
] | null | null | null | README.md | synXero/Config-Public | 0d0f51e9ce0999148990c61377e327c8f2edbbb3 | [
"MIT"
] | null | null | null | README.md | synXero/Config-Public | 0d0f51e9ce0999148990c61377e327c8f2edbbb3 | [
"MIT"
] | null | null | null | # Config-Public
MAC OS Dev Env Files
| 12.333333 | 20 | 0.756757 | kor_Hang | 0.893545 |
f7031db93271357df9b31f918dc874071cf76214 | 553 | md | Markdown | _posts/2010-12-25-the-best-messiah-video-of-2010.md | millerj870/millerj870.github.io | 96d0ddc187a22443add0f1a67daf0690b1effae9 | [
"CC0-1.0"
] | null | null | null | _posts/2010-12-25-the-best-messiah-video-of-2010.md | millerj870/millerj870.github.io | 96d0ddc187a22443add0f1a67daf0690b1effae9 | [
"CC0-1.0"
] | null | null | null | _posts/2010-12-25-the-best-messiah-video-of-2010.md | millerj870/millerj870.github.io | 96d0ddc187a22443add0f1a67daf0690b1effae9 | [
"CC0-1.0"
] | null | null | null | ---
author: jason
date: 2010-12-25 21:27:30+00:00
layout: post
title: The Best 'Messiah' video of 2010
tags: fun
---
Here's the best holiday video I've seen this year:
<iframe src="http://www.youtube.com/embed/LyviyF-N23A?wmode=transparent" allowfullscreen frameborder="0" height="417" width="500"></iframe>
It looks like a classroom of students in an Alaskan town put this together creatively and thoughtfully. Viewers see a slice of winter life in the village, and the kids pull in quite a number of people to help 'sing' the 'Messiah' with them.
| 39.5 | 240 | 0.75226 | eng_Latn | 0.991763 |
f703ec5ebb54934762f96f77e83ea48450715328 | 359 | md | Markdown | _posts/2021-07-08/2021-06-20-First-thought-f-in-your-mind-20210620152353820596.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | _posts/2021-07-08/2021-06-20-First-thought-f-in-your-mind-20210620152353820596.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | _posts/2021-07-08/2021-06-20-First-thought-f-in-your-mind-20210620152353820596.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | ---
title: "First thought [f] in your mind"
metadate: "hide"
categories: [ Pussy ]
image: "https://preview.redd.it/tq49h1vebd671.jpg?auto=webp&s=1d9741894ec6e417fbdb6c52cd1ac2061b572764"
thumb: "https://preview.redd.it/tq49h1vebd671.jpg?width=1080&crop=smart&auto=webp&s=8456d602b56b7767b0e9a161d534ce1317a0d2b2"
visit: ""
---
First thought [f] in your mind
| 35.9 | 125 | 0.771588 | yue_Hant | 0.161429 |
f704cda53083517887de4e51e9a7813e8cf677ba | 1,967 | md | Markdown | _includes/calculator/_dummy/panel-3.md | frtib/tsp-redesign | 7d733182a4fa0900799f0f10bb096259113e7110 | [
"CC0-1.0"
] | 3 | 2018-07-06T20:46:03.000Z | 2021-02-04T13:38:42.000Z | _includes/calculator/_dummy/panel-3.md | frtib/tsp-redesign | 7d733182a4fa0900799f0f10bb096259113e7110 | [
"CC0-1.0"
] | 128 | 2018-07-25T21:16:13.000Z | 2021-07-29T00:26:11.000Z | _includes/calculator/_dummy/panel-3.md | frtib/tsp-redesign | 7d733182a4fa0900799f0f10bb096259113e7110 | [
"CC0-1.0"
] | 3 | 2019-02-07T22:23:01.000Z | 2021-03-30T21:14:53.000Z | {% comment %}
Results NAME panel (3) for CALC.
{% endcomment %}
{% assign panelID = include.panelID | default: 3 %}
{% assign hide = 'display: block;' %}
{% assign gridClass2 = include.gridClass2 | default: 'results' %}
{% if include.hide == 1 %} {% assign hide = 'display: none;' %} {% endif %}
<section id="panel-{{ panelID }}" class="calculator-panel" style="{{ hide }}" markdown="1">
{% include calculator/infoBox.html icon='info'
title="Maximizing Agency or Service Contributions"
textBlock="To receive the maximum Agency or Service Matching Contributions, you must contribute 5% of your basic pay each pay period."
%}
<div class="results-grid-frame" markdown="1">
{% include calculator/resultsRow.html rightID="deferral-limit" right=""
left="IRS Elective Deferral Limit for <span class='year-choosen'>YYYY</span>" %}
{% include calculator/resultsRow.html rightID="total-contributed" right=""
left="How much you will have contributed before your new amount is effective" %}
{% capture newAmountTextBlock %}
**Here’s the new amount you can contribute each remaining pay period if you
want to maximize your contributions for <span class='year-choosen'>YYYY</span>**
(rounded down to the nearest dollar).
To change how much you contribute, log into your payroll system and select the Thrift Savings Plan option. Common payroll systems include
[Direct Access]({{ site.baseurl }}/exit/?idx=47){:rel="nofollow"}, [Employee Express]({{ site.baseurl }}/exit/?idx=7){:rel="nofollow"}, EBIS, [LiteBlue]({{ site.baseurl }}/exit/?idx=8){:rel="nofollow"}, [myPay]({{ site.baseurl }}/exit/?idx=6){:rel="nofollow"}, and [NFC EPP]({{ site.baseurl }}/exit/?idx=9){:rel="nofollow"}.
{% endcapture %}
{% assign newAmountTextBlock = newAmountTextBlock | markdownify %}
{% include calculator/resultsRow.html rightID="new-contribution" right="" left=newAmountTextBlock %}
</div>
{% include calculator/button-block.html panelID=panelID revise=2 %}
</section>
| 46.833333 | 324 | 0.712761 | eng_Latn | 0.880504 |
f705e148c5ccc5bdcc4fda6205fb8f30039482fa | 224 | md | Markdown | _posts/2019-07-25-Replay.md | shuferhoo/shuferhoo.github.io | 7dc458dcf93e53be1a1916dca4e1cf0e959d1653 | [
"MIT"
] | null | null | null | _posts/2019-07-25-Replay.md | shuferhoo/shuferhoo.github.io | 7dc458dcf93e53be1a1916dca4e1cf0e959d1653 | [
"MIT"
] | null | null | null | _posts/2019-07-25-Replay.md | shuferhoo/shuferhoo.github.io | 7dc458dcf93e53be1a1916dca4e1cf0e959d1653 | [
"MIT"
] | null | null | null | ---
layout: post_layout
title: 20190723复盘
time: 2019年07月25日 星期四
location: 广州
pulished: true
excerpt_separator: "```"
---
大盘(国证A指)连续两天上涨了,昨天上涨1%,今天上涨0。7%。
------------------------------------------------------------------
| 16 | 66 | 0.504464 | eng_Latn | 0.173335 |
f7062b6531442823906161d316288b9a943f2583 | 619 | md | Markdown | README.md | piesome/valta | b61cd30a9cffdc7fe9742eff8661827d6681849f | [
"MIT"
] | 5 | 2017-05-14T11:20:55.000Z | 2019-12-16T23:51:45.000Z | README.md | piesome/valta | b61cd30a9cffdc7fe9742eff8661827d6681849f | [
"MIT"
] | 36 | 2017-04-14T10:14:32.000Z | 2017-05-14T19:39:24.000Z | README.md | piesome/valta | b61cd30a9cffdc7fe9742eff8661827d6681849f | [
"MIT"
] | 1 | 2022-02-15T23:35:43.000Z | 2022-02-15T23:35:43.000Z | <div align="center">
<img src="logo.png" />
<div><strong>Multiplayer 4X strategy game</strong></div>
</diV>
## dev start
```sh
yarn install
yarn data
./node_modules/.bin/knex migrate:latest --env index-server-development
./node_modules/.bin/knex migrate:latest --env game-server-development
# each in different shell
yarn index-server
yarn game-server:dev
yarn client:dev
```
## generating a terrain svg
```sh
./node_modules/.bin/ts-node src/Common/Util/HexSvg.ts -C '#ffffff' -c '#000000' > thing.svg
```
## license
All code and assets are licensed under the MIT License (Expat) unless otherwise noted.
| 21.344828 | 91 | 0.717286 | eng_Latn | 0.645991 |
f70637d9370e6b5afe5431534309a7a56dfcc9b4 | 3,634 | md | Markdown | README.md | hallerch/home-office-manager | 12ab76f2609d8125bd9dd4683369a251cbdfbe0b | [
"Apache-2.0"
] | null | null | null | README.md | hallerch/home-office-manager | 12ab76f2609d8125bd9dd4683369a251cbdfbe0b | [
"Apache-2.0"
] | null | null | null | README.md | hallerch/home-office-manager | 12ab76f2609d8125bd9dd4683369a251cbdfbe0b | [
"Apache-2.0"
] | null | null | null | # home-office-manager
Manage your home office day in an easy way
An excercise to explain the three **User Task Forms** from Camunda
___
### Java delegate
[Camunda Documentation: Delegation Code][1]
___
### Service task to Manual Task
Difference between [Manual Task][2] and [User Task][3]
___
### User task with user interface
#### Version 1 - Generated Task Forms
In Camunda Modeler click on a User Task Activity.
Click on the tab 'Forms' and add a 'Form Field'.
<img src="images/generated-task-form-1.png" width="500" alt="generated-task-form-1.png">
In Camunda go to the 'Tasklist' claim the user task and fill in the fields.
<img src="images/generated-task-form-2.png" width="500" alt="generated-task-form-2.png">
* [Camunda Documentation: Generated Task Forms][4]
#### Version 2 - Embedded Task Forms (Angular.js)
Install **AngularJS** and the **Camunda BPM JS SDK** as dependencies into your project using *bower*:
Install *bower* through NPM with `npm install -g bower`
For **AngularJS**:
```bash
bower install angular#1.2.32 --save
```
For **Camunda BPM JS SDK**:
```bash
bower install camunda-bpm-sdk-js --save
```
On the given *User Task* in the **Camunda Modeler**, set the following value into the *Form Key* field:
`embedded:app:forms/your-form.html`
![Embedded Task Form with AngularJS](images/embedded-task-form-angularjs-1.png)
Now you should be able to use AngularJS functions and more in your embedded form.
Example in *angularjs-form.html*
```html
<form role="form" name="form">
<label for="input-field">set to 'true' to buy some food</label><br>
<input type="text"
cam-variable-name="enoughFood"
cam-variable-type="Boolean"
ng-model="enoughFood"
id="input-field">
<p ng-show="enoughFood">
Your input: <b>{{ enoughFood }}</b> <br>
<sub>some sweet Two-Way Binding by AngularJS :o</sub>
</p>
</form>
```
[Camunda Documentation: Embedded Task Forms][5]
[Standalone Usage of JS SDK with AngularJS][6]
#### Version 3 - External Task Forms (Custom)
In Camunda Modeler click on a User Task Activity.
Click on the tab 'Forms' and add in the 'Form Key' field 'app:forms/external-form.html'.
<img src="images/external-task-forms-1.png" width="500" alt="external-task-forms-1.png">
Check and understand the file src/main/webapp/forms/embedded-form.html
<img src="images/external-task-forms-2.png" width="500" alt="external-task-forms-1.png">
In Camunda go to the 'Tasklist' claim the user task and click on 'Open external form'
<img src="images/external-task-forms-3.png" width="500" alt="external-task-forms-2.png">
[Camunda Documentation: External Task Forms][7]
___
### Literature and sources
* [Camunda Documentation: Delegation Code][1]
* [Camunda Documentation: Manual Task][2]
* [Camunda Documentation: User Task][3]
* [Camunda Documentation: Generated Task Forms][4]
* [Camunda Documentation: Embedded Task Forms][5]
* [Standalone Usage of JS SDK with AngularJS][6]
* [Camunda Documentation: External Task Forms][7]
[1]: https://docs.camunda.org/manual/latest/user-guide/process-engine/delegation-code/
[2]: https://docs.camunda.org/manual/7.8/reference/bpmn20/tasks/manual-task/
[3]: https://docs.camunda.org/manual/7.8/reference/bpmn20/tasks/user-task/
[4]: https://docs.camunda.org/manual/latest/user-guide/task-forms/#generated-task-forms
[5]: https://docs.camunda.org/manual/latest/user-guide/task-forms/#embedded-task-forms
[6]: https://github.com/camunda/camunda-bpm-examples/tree/master/sdk-js/browser-forms-angular
[7]: https://docs.camunda.org/manual/latest/user-guide/task-forms/#external-task-forms
| 34.283019 | 103 | 0.721519 | eng_Latn | 0.511215 |
f70883daa3cf0df31ba00428836c43e1ae7d8d39 | 290 | md | Markdown | README.md | geelaro/SimpleVolley | d00b918c2acba682a4534ba5214a5ff1ee085bd8 | [
"Apache-2.0"
] | null | null | null | README.md | geelaro/SimpleVolley | d00b918c2acba682a4534ba5214a5ff1ee085bd8 | [
"Apache-2.0"
] | null | null | null | README.md | geelaro/SimpleVolley | d00b918c2acba682a4534ba5214a5ff1ee085bd8 | [
"Apache-2.0"
] | null | null | null | # SimpleVolley
An Android Demo like Custom ListView Feed using Volley
This demo project is from [https://www.androidhive.info/2014/05/android-working-with-volley-library-1/](https://www.androidhive.info/2014/05/android-working-with-volley-library-1/).
It is a good sample to learn Volley.
| 48.333333 | 181 | 0.789655 | eng_Latn | 0.441276 |
f70930fe487d500d881cdd6a99f513514abb735b | 276 | md | Markdown | README.md | shiftinv/iOS-VPN-Autoconnect | 61b2c2d0fdfbc3adc79305aab8fbfb63b26e595c | [
"MIT"
] | null | null | null | README.md | shiftinv/iOS-VPN-Autoconnect | 61b2c2d0fdfbc3adc79305aab8fbfb63b26e595c | [
"MIT"
] | null | null | null | README.md | shiftinv/iOS-VPN-Autoconnect | 61b2c2d0fdfbc3adc79305aab8fbfb63b26e595c | [
"MIT"
] | null | null | null | iOS VPN auto-connect mobileconfig file generator.
Fill out the form. Hit download. Airdrop or email it to yourself.
Currently supports IPSec, support for other VPN types coming soon
Adapted from [klinquist's original code](https://github.com/klinquist/iOS-VPN-Autoconnect) | 39.428571 | 90 | 0.800725 | eng_Latn | 0.966385 |
f70c40934f2110dc4b337ebc5cf5304224207581 | 1,938 | md | Markdown | README.md | etki/splinter | e7ceebcb5caa000cb6c48023a6101499169a85c1 | [
"Apache-2.0"
] | null | null | null | README.md | etki/splinter | e7ceebcb5caa000cb6c48023a6101499169a85c1 | [
"Apache-2.0"
] | null | null | null | README.md | etki/splinter | e7ceebcb5caa000cb6c48023a6101499169a85c1 | [
"Apache-2.0"
] | null | null | null | splinter
========
![Splinter!](http://img3.wikia.nocookie.net/__cb20130921120031/protagonist/ru/images/f/f9/22519539.jpg)
# Использование
```bash
echo Yarrr! && mvn jetty:run
```
# Смена базы данных
Управление подключением к базе данных задается в двух файлах: `src/main/resources/database.default.properties` и
`src/main/resources/database.local.properties`. Первый файл предназначен для хранения настроек для проекта в целом,
второй - для управления локальными подключениями на машине разработчика (т.е. трогать стоит только второй).
Поддерживаются следующие свойства:
| Свойство | Описание |
|-------------|--------------------------------------------------------------------------------------------------------|
| db.driver | Тип базы данных. Разрешенные значения: `h2 / mysql` (по умолчанию `h2`). |
| db.location | Расположение базы данных. Разрешенные значения: `network / file / memory` для H2, `network` для MYSQL. |
| db.name | Имя базы данных (по умолчанию `splinter`). |
| db.path | Путь к директории с базой данных (используется только при драйвере H2 и `db.location = file/network`). |
| db.host | Имя хоста сервера базы данных (используется только при `db.location = network`). |
| db.port | Порт сервера базы данных (используется только при `db.location = network`). |
| db.user | Пользователь базы данных. |
| db.password | Пароль пользователя базы данных. |
Пример `database.local.properties`:
```
db.driver=mysql
db.location=network
db.user=root
db.password=insecureRootPassword
db.name=random_database
``` | 51 | 120 | 0.546956 | rus_Cyrl | 0.78556 |
f70c4543db4e5a4a7ffed99047d5cf5242b2c26a | 1,666 | md | Markdown | docs/vs-2015/debugger/debug-interface-access/idiaenumdebugstreams-clone.md | tommorris/visualstudio-docs.cs-cz | 92c436dbc75020bc5121cc2c9e4976f62c9b13ca | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/debugger/debug-interface-access/idiaenumdebugstreams-clone.md | tommorris/visualstudio-docs.cs-cz | 92c436dbc75020bc5121cc2c9e4976f62c9b13ca | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/debugger/debug-interface-access/idiaenumdebugstreams-clone.md | tommorris/visualstudio-docs.cs-cz | 92c436dbc75020bc5121cc2c9e4976f62c9b13ca | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Idiaenumdebugstreams::clone – | Dokumentace Microsoftu
ms.custom: ''
ms.date: 2018-06-30
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-debug
ms.tgt_pltfrm: ''
ms.topic: article
dev_langs:
- C++
helpviewer_keywords:
- IDiaEnumDebugStreams::Clone method
ms.assetid: e85ec592-de97-4f95-a774-1623315ba415
caps.latest.revision: 11
author: mikejo5000
ms.author: mikejo
manager: ghogen
ms.openlocfilehash: 7a6f62680cc0a7007ed4c66fa8e28e8286901a86
ms.sourcegitcommit: 55f7ce2d5d2e458e35c45787f1935b237ee5c9f8
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 08/22/2018
ms.locfileid: "42681970"
---
# <a name="idiaenumdebugstreamsclone"></a>IDiaEnumDebugStreams::Clone
[!INCLUDE[vs2017banner](../../includes/vs2017banner.md)]
Nejnovější verzi tohoto tématu můžete najít v [idiaenumdebugstreams::clone –](https://docs.microsoft.com/visualstudio/debugger/debug-interface-access/idiaenumdebugstreams-clone).
Vytvoří čítač, který obsahuje stejného stavu jako aktuální enumerátor výčtu.
## <a name="syntax"></a>Syntaxe
```cpp#
HRESULT Clone (
IDiaEnumDebugStreams** ppenum
);
```
#### <a name="parameters"></a>Parametry
`ppenum`
[out] Vrátí [idiaenumdebugstreams –](../../debugger/debug-interface-access/idiaenumdebugstreams.md) objekt, který obsahuje duplicitní čítače výčtu. Datové proudy se neduplikují, pouze enumerátor.
## <a name="return-value"></a>Návratová hodnota
Pokud je úspěšná, vrátí `S_OK`; v opačném případě vrátí kód chyby.
## <a name="see-also"></a>Viz také
[IDiaEnumDebugStreams](../../debugger/debug-interface-access/idiaenumdebugstreams.md)
| 30.290909 | 198 | 0.751501 | ces_Latn | 0.690033 |
f70c8501d8d29704394ee1cdeafeb66dcd93957d | 1,736 | md | Markdown | rust_icu_sys/bindgen/README.md | lacc97/rust_icu | e98ec139afb54f354482416262d60560689c64f1 | [
"Apache-2.0"
] | 210 | 2019-02-05T12:45:09.000Z | 2022-03-28T07:59:06.000Z | third_party/rust_crates/vendor/rust_icu_sys/bindgen/README.md | PlugFox/fuchsia | 39afe5230d41628b3c736a6e384393df954968c8 | [
"BSD-2-Clause"
] | 158 | 2019-11-14T05:03:45.000Z | 2022-03-29T22:33:41.000Z | third_party/rust_crates/vendor/rust_icu_sys/bindgen/README.md | PlugFox/fuchsia | 39afe5230d41628b3c736a6e384393df954968c8 | [
"BSD-2-Clause"
] | 73 | 2019-03-06T18:55:23.000Z | 2022-03-26T12:04:51.000Z | # This directory contains the output of bindgen-generated ICU bindings.
## run_bindgen.sh
This script is used to run bindgen manually, or out of band of the normal build
cycle of `rust_icu_sys`. This is useful for building `rust_icu` without the
`bindgen` feature on; which means that `bindgen` is not run during build, but
a pre-existing bindings file is used instead.
Of course, for that to work, there has to be something that generates the
one-off files. `run_bindgen.sh` is that something.
## I/O behavior
The input to the script are the headers from the Unicode directory. The list
of headers to examine is listed in the script itself, and you will find things
like `ucal`, `udat`, and others. The directory is auto-detected based on the
data that the program `icu-config` gives about the installation.
The output of the script is a rust library file containing the auto-generated
low-level bindings for the ICU library. The name of the output file depends
on the
## Dependencies
The script attempts to auto-detect its dependencies and will fail early if
one is not detected. The dependencies known so far are:
- bash
- icu-config (from ICU installations)
- bindgen (from rust tools)
- llvm-config (from the "llvm-dev" package)
- tr (from coreutils)
## Running the script.
The script is intended to be auto-piloted. Ideally it is invoked from a
Makefile target. For the time being two things are important here:
1. The list of headers and identifiers that need processing is set separately
from `build.rs` but shoudl be maintained to keep in sync.
2. Output directory is by default the current directory. It can be modified by
setting the environment variable `$OUTPUT_DIR` when starting the program.
| 37.73913 | 79 | 0.770737 | eng_Latn | 0.999845 |
f70c85a10893ce23c2fb7b80758a7ea44ee42c5c | 11,021 | md | Markdown | _posts/2016-11-18-DotVVM_alebo_ako_skrotit_LOB_aplikacie.md | zemacik/zemacik.github.io | 1b6f2da772bf6b22c538c4298b4935fcba652599 | [
"MIT"
] | null | null | null | _posts/2016-11-18-DotVVM_alebo_ako_skrotit_LOB_aplikacie.md | zemacik/zemacik.github.io | 1b6f2da772bf6b22c538c4298b4935fcba652599 | [
"MIT"
] | null | null | null | _posts/2016-11-18-DotVVM_alebo_ako_skrotit_LOB_aplikacie.md | zemacik/zemacik.github.io | 1b6f2da772bf6b22c538c4298b4935fcba652599 | [
"MIT"
] | null | null | null | ---
layout: blogpost
type: post
title: "DotVVM, alebo ako skrotiť line of business (LOB) aplikácie"
description: "DotVVM, alebo ako skrotiť line of business (LOB) aplikácie"
date: 2016-11-18
categories:
- Framework
tags: [DotVVM, LOB]
published: true
canshare: true
comments: true
---
Pred pár dňami som mal u nás vo firme prednášku o novom webovom frameworku pre ASP.NET - **DotVVM**. Keďže mám tému ešte v hlave bola by škoda ju nespísať na papier a podeliť sa.
## Aké máme dnes možnosti?
---
Z pohľadu .NET vývojára máme dnes v zásade tri cesty akými sa vieme vydať pri vývoji webovej aplikácie
### 1. ASP.NET WebForms
Môžeme použiť staré ASP.NET WebForms. WebForms nám ponúkali vývoj založený na stránkach a komponentoch. **Na vývoj LOB aplikácií to bolo ako stvorené.** Veľa programátorov koncept nepochopilo, akékoľvek rozvrstvenie aplikácií neexistovalo, a všetka biznis logika bola v samotných stránkach. Aplikácia nebola testovateľná, a po určitom čase bola neudržiavateľná.
Ďalší nešvár WebForms aplikácií bol následne **vygenerovaný HTML kód**, ktorý na dobu s pred 10 rokov možno stačil. Dnes keď potrebujeme aplikáciu cieliť na viac zariadení, obrazoviek, máme responzívny dizajn táto technológia je takmer nepoužiteľná.
Taktiež spojenie WebForms s javascriptom nebolo moc prívetivé.
A následne **VIEWSTATE**. Kapitola sama o sebe. Nepochopený koncept, kde mali načítané stránky aj niekoľko megabajtov, ktoré následne chodili medzi prehliadačom a serverom.
No a na záver, pre ASP.NET WebForm **nebude podpora v .NET CORE**.
<br/>
### 2. ASP.NET MVC, WEB API, a kopou ďalších JS knižníc
Ďalšou možnosťou je vývoj aplikácie v ASP.NET MVC, WEB API, a kopou ďalších JS knižníc.
Pri tomto spôsobe vývoja aplikácie, takmer **na všetko potrebujeme vytvárať vlastnú infraštruktúru.**
* či už **validácie** na klientskej, a serverovej strane. A stým spojené udržiavanie rovnakého spôsobu prezentácie výsledkov používateľovi.
* **preposielanie, a formátovanie dátumu a času**, je tiež pekný oriešok
* **lokalizácia** a následné preposlanie na klienta
Na všetko toto musíme myslieť pri vývoji aplikácie nad ASP.NET MVC
Taktiež potrebujeme veľa, veľa javascriptového kódu.
Ak chceme mať aspoň trochu dynamickú stránku, **na všetko potrebujeme javascriptové knižnice**, alebo si funkcionalitu napíšeme sami. Toto všetko udržiavať je celkom výzva.
<br/>
### 3. SPA Framework (Angular, React, Ember, Aurelia), + ASP.NET WEB API
No a tretia možnosť, dnes veľmi moderná, je vývoj SPA (Single page application) v niektorom z tento týždeň populárnych frameworkov, ako sú napríklad Angular JS, React, Vue.js, Aurelia poprípade Ember.js.
Pre mnohé firmy presedlať na tento spôsob vývoja môže znamenať vynaloženie značného úsilia, času a finančných prostriedkov do zaškolenia programátorov. Programátori si musia osvojiť veľa nových zručností, konceptov, a technológií.
Ale na rovinu. Nie je to úplne jednoduché sa vo svete javascriptu orientovať.
Ak berieme ohľad aj na udržiavanie existujúceho kódu, .NET platforma je relatívne stabilná, a vieme použiť aj 10 rokov starý projekt, urobiť v ňom zmenu, a opätovne nasadiť. Čo v JS svete môže byť problém, hlavne ak dnes použitý framework, bude zajtra, o týždeň, možno o rok nepodporovaný.
<br/>
![DotVVM Logo](/assets/posts/2016/20161118_02_Logo.png){: class="img img-responsive"}
## Na scénu prichádza DotVVM
---
Frustráciu s vývojom webových aplikácií nepociťujem len ja, ale aj ľudia z firmy [Riganti](http://www.riganti.cz/), konkrétne [Tomáš Herceg](https://twitter.com/hercegtomas). On a jeho tím sa rozhodli napísať svoj vlastný framework - DotVVM.
Investovali doň nemálo času. A ako on sám spomínal na jednej svojej prednáške, že keby ohodnotil čas ktorý frameworku vo firme venovali, tak by si mohol kúpiť Teslu. :)
DotVVM je Opensource framework pre ASP.NET.
Ako Tomáš sám o ňom na prednáškach hovorí, píšete v ňom "javascriptové aplikácie bez javascriptu". Ten javascript tam samozrejme je, len s ním tak často neprichádzame do styku, ako je to v skôr popísaných spôsoboch vývoja.
Perfektne sa hodí na vývoj Line of Business (LOB) aplikácií. Hlavne na data entry business aplikácie kde sa rieši kopec formulárov, ich vzťahy a pod.
Napríklad také aplikácie poisťovní, bánk, ... Vynikajúci príklad je taký formulár na poistenie PZP, zložitejší wizard, kde nasledujúca obrazovka zobrazuje polia podľa hodnôt stránky predchádzajúcej.
Nie je úplne vhodný na písanie bežných firemných prezentačných webov (aj keď aj to je možné), aplikácie kde sa veľký dôraz berie na množstvo animácií, a facny funkcionalitu.
DotVVM je založený na vzore MVVM (Model – View - ViewModel).
- ViewModel je napísaný v C#
- A Views sú V HTML s ďalšími vylepšeniami.
Na klientskej strane DotVVM **používa knižnicu Knockout.js**.
Vývoj v tomto frameworku nám uľahčuje aj rozšírenie do Visual Studia 2015.
Rozprávku o tom ako prebiehal vývoj tohto rozšírenia si môžete pozrieť na videu zo stránky wug.cz. Pozrite sa na to je to celkom zábavné.
<br/>
## Atribúty, Page Livecycle
---
### Binding
Jednotlivé properties vo viewmodeloch môžete odekorovať Binding atribútmi.
Týmito atribútmi poviete frameworku akým smerom má povoliť bindovanie z a do viewmodelu.
Možnosti ktoré dnes máme sú:
- **None** – Čiže bindovanie je zakázané, a ak príde daná property z klienta bude ignorovaná.
- **Both** – Default správanie, kde prebieha prenos dať medzi klientom a serverom
- **ClientToServer**, **ServerToClient** sú samo popisné
### Page Livecycle
Ďalšia vec čo by som chcel spomenúť je životný cyklus stránky.
![Page Livecycle](/assets/posts/2016/20161118_01_PageLiveCycle.png){: class="img img-responsive"}
Ako môžete vidieť na hore uvedenom obrázku, je veľmi podobný tomu z ASP.NET WebForms.
<br/>
## Je DotVVM nové WebForms?
---
Tak ako som už spomínal, DotVVM je do značnej miery inšpirované ASP.NET WebForms, ale moderne, a jednoducho. Rieši problémy ktoré som popísal vyššie.
- **ViewModel je ViewState**
- Máme plnú kontrolu nad tým čo a ako tam uložíme.
- **Generovaný výstup**
- Generovaný výstup komponentov je omnoho elegantnejší, bez akýchkoľvek inline štýlov.
- Všetko čo natívne komponenty generujú je popísané v dokumentácii na stránke projektu.
- **Testovateľnosť**
Pre WebForm vývojárov je prechod naozaj jednoduchý.
<br/>
## Komponenty
---
Na [stránke projektu](https://www.dotvvm.com/docs/latest) nájdete zoznam základných komponent ktoré framework obsahuje. Iné si môžeme dokúpiť (O tom poviem neskôr), alebo si sami veľmi jednoducho vytvoriť.
Framework nám ponúka **dve možnosti vytvárania komponent**.
- Jedna sú takzvané **markup controls**, tie majú koncovku dotcontrol. Je to kus markupu, ktorý môžeme opakovať v rôznych stránkach. Ako napríklad Editor adries. (Adresa doručenia, fakturačná adresa) na jednej stránke.
- Potom máme **Code-only controls** tieto nemajú žiadny markup, a všetko sa generuje v kóde. Vieme ich zabaliť do samostatného DLL súboru. Ich vývoj je ale o niečo (možno trošku viac) zložitejší.
<br/>
## Zaujímavé funkcie
---
Ďalej by som chcel spomenúť niekoľko ďalších funkcií ktoré nám framework ponúka.
- Ako som písal vyššie **môžeme si vytvárať vlastné komponenty**.
- Máme tu priamu podporu **ModelValidation**
- Možnosť prepnúť aplikáciu do **SPA režimu** kde sa neprekresľuje pri navigovaní celá stránka, ale prepisuje sa iba content v ContentPlaceholder.
- **ActionFilter** je rovnaký concept ako v MVC
- Priama **integrácia s IoC** (injektovanie závislostí do ViewModelov)
- **Custom presenters** je niečo ako custom HttpHandlers. Použijeme v prípade ak potrebujeme generovať RSS feed, SiteMap a pod.
- **Postback handlers** – pred tým ako sa urobí postback na server vieme vykonať akciu. Napríklad čakať na potvrdenie akcie používateľom, a pod.
<br/>
## Aktuálny stav, Licencia
---
Aktuálne je DotVVM vo verzii 1.0.5
- RTM je pre plný .NET framework
Pracujú na podpore pre .NET Core. Niežeby framework nebol funkčný, ale proste len nie je vo verzii RTM. Pevne verím že počas najbližších pár mesiacov bude RTM verzia, keďže dnes tu máme alfa verziu.
Ako som spomínal na začiatku, je to open source framework, a jeho kompletné zdrojové kódy si môžete pozrieť na [githube](https://github.com/riganti/dotvvm). Tak ako Tomáš hovorí, je a vždy bude open source.
Na čom však chcú utŕžiť dáku korunu sú ďalšie komponenty, alebo samotné **rozšírenie do Visual Studia**, ktoré má dve verzie,
- Free,
- Professional, ktorá má pokročilejšie funkcie, a aktuálne stojí 129 dolárov.
Podrobné porovnanie si môžeme pozrieť na stránke [DotVVM for Visual Studio 2015](https://www.dotvvm.com/landing/dotvvm-for-visual-studio-extension)
Ďalšiu z vecí ktoré si už dnes môžeme zakúpiť je [sada komponent]((https://www.dotvvm.com/landing/bootstrap-for-dotvvm)) pre UI framework [Twitter Bootstrap](http://getbootstrap.com/). V tomto prípade sa jedná naozaj o drobné.
<br/>
## Plány
---
Tak a čo máme plánované do budúcnosti:
- Vydanie verzie pre .NET Core. Ako som už spomínal.
- Vydanie sady komponent pre business aplikácie s názvom [DotVVM Business Pack](https://www.dotvvm.com/landing/business-pack). Bude obsahovať ďalšie komponenty ktoré nám urýchlia vývoj formulárových aplikácií. Súčasťou budú komponenty ako AutoComplete, ComboBox s podporou šablón, DateTimeRangePicker, ImageUpload, GridView s viac možnosťami. Komponentov má byť až 40.
Termín uvedenia na trh v tomto momente netuším.
- Ďalšia vec ktorá sa plánuje sú takzvané [Dynamic data](https://github.com/riganti/dotvvm-dynamic-data). Neviem či ste stým reálne v .NETe niekedy pracovali. Ja osobne nie, len si tak matne spomínam že to vedelo vygenerovať komplet formulár na základe jeho modelu a doplnkových atribútov. Len pre úplnosť doplním blog post o [DotVVM Dynamic data](https://www.dotvvm.com/blog/6/Preview-of-DotVVM-Dynamic-Data-Released).
- Potom tu máme veci ako Minifikácia a bundling JS a CSS súborov.
- Hosting DotVVM aplikácie vo WebView v Xamarin aplikácii, alebo UWP aplikácii. Je to celkom zaujímavé, nechajme sa prekvapiť rýchlosťou takejto aplikácie.
- No a na záver tu máme podporu pre `Electron`. Electron je Framework pomocou ktorého môžeme písať multiplatformové desktopové aplikácie v JS, CSS, a HTML. Nad Electronom je postavené napríklad Visual Sudio Code, GitHub client for Windows, Atom editor, Slack, a mnoho ďalších aplikácií.
## Zdroje
---
Ešte na záver by som uviedol zopár zdrojov ktoré sa vám možno zídu pri ďalšom štúdiu frameworku.
[Domovská stránka DotVVM](https://www.dotvvm.com/)
[Zdrojové kódy DotVVM, a sample](https://github.com/riganti/dotvvm)
[Sample aplikácia Checkbook](https://github.com/riganti/dotvvm-samples-checkbook)
[Video - DotVVM: "Javascript Apps With No Javascript"](https://channel9.msdn.com/Series/NET-DeveloperDays-2015-on-demand/dotVVM-Javascript-Apps-With-No-Javascript-Tomas-Herceg?ocid=player)
[YouTube séria - Naučte se DotVVM](https://www.youtube.com/watch?v=RrvM46uCouE&list=PLAz0LszWXtZbFRITZR8diXokBP72lkM_g)
| 48.550661 | 419 | 0.78051 | slk_Latn | 0.999917 |
f70d076722cfdd6aea81716cd593d3b3617b1711 | 244 | md | Markdown | bash/string.md | NobodyXu/url_bookmarks | eb307bb4c03b814a4386b9aff40de3a5ffc36330 | [
"MIT"
] | null | null | null | bash/string.md | NobodyXu/url_bookmarks | eb307bb4c03b814a4386b9aff40de3a5ffc36330 | [
"MIT"
] | null | null | null | bash/string.md | NobodyXu/url_bookmarks | eb307bb4c03b814a4386b9aff40de3a5ffc36330 | [
"MIT"
] | null | null | null | 1. [Newline in bash literal](https://stackoverflow.com/a/13192701/8375400)
2. [Multi-line string with extra space (preserved indentation)](https://stackoverflow.com/questions/23929235/multi-line-string-with-extra-space-preserved-indentation)
| 81.333333 | 167 | 0.79918 | kor_Hang | 0.25418 |
f70dc36e3931175f1013f0355aa98a719e1c0dee | 3,506 | md | Markdown | articles/commerce/channels-prerequisites.md | MicrosoftDocs/Dynamics-365-Operations.tr-tr | 60fbe90cb8d4cfcd775d48c394e827a7795cfe27 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2020-05-18T17:15:00.000Z | 2022-03-02T03:46:26.000Z | articles/commerce/channels-prerequisites.md | MicrosoftDocs/Dynamics-365-Operations.tr-tr | 60fbe90cb8d4cfcd775d48c394e827a7795cfe27 | [
"CC-BY-4.0",
"MIT"
] | 8 | 2017-12-08T15:55:56.000Z | 2019-04-30T11:46:11.000Z | articles/commerce/channels-prerequisites.md | MicrosoftDocs/Dynamics-365-Operations.tr-tr | 60fbe90cb8d4cfcd775d48c394e827a7795cfe27 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Kanal kurulum önkoşulları
description: Bu konu, Microsoft Dynamics 365 Commerce'te kanalların kurulum önkoşulları hakkında genel bilgi vermektedir.
author: samjarawan
ms.date: 02/21/2020
ms.topic: article
ms.prod: ''
ms.technology: ''
audience: Application User
ms.reviewer: v-chgri
ms.custom: ''
ms.assetid: ''
ms.search.region: Global
ms.author: samjar
ms.search.validFrom: 2020-01-20
ms.dyn365.ops.version: Release 10.0.8
ms.openlocfilehash: 6ad8911df00fde4675d4d9b52fcdd52ff58d4983b177316a7606de277328226b
ms.sourcegitcommit: 42fe9790ddf0bdad911544deaa82123a396712fb
ms.translationtype: HT
ms.contentlocale: tr-TR
ms.lasthandoff: 08/05/2021
ms.locfileid: "6742476"
---
# <a name="channel-setup-prerequisites"></a>Kanal kurulum önkoşulları
[!include [banner](includes/banner.md)]
Bu konu, Microsoft Dynamics 365 Commerce'te kanal kurulum önkoşulları hakkında genel bilgi vermektedir.
Bir Dynamics 365 Commerce kanalı oluşturulmadan önce, önkoşul olan bazı görevlerin tamamlanması gerekir. Aşağıdaki önkoşul görevleri listeleri kanal türüne göre düzenlenmiştir.
> [!NOTE]
> Bazı belgeler hala yazılıyor ve yeni içerik yayımlandıkça bağlantılar güncelleştirilecek.
## <a name="initialization"></a>Başlatma
- [Çekirdek verileri başlatma](enable-configure-retail-functionality.md)
## <a name="global-prerequisities-required-for-all-channel-types"></a>Tüm kanal türleri için gereken global önkoşullar
- [Tüzel kişilik yapınızı tanımlama ve yapılandırma](channels-legal-entities.md)
- [Organizasyon hiyerarşinizi yapılandırma](channels-org-hierarchies.md)
- [Ambar ayarlama](channels-setup-warehouse.md)
- [Satış vergisini yapılandırma](../finance/general-ledger/indirect-taxes-overview.md?toc=/dynamics365/commerce/toc.json)
- [E-posta bildirimi profili ayarlama](email-notification-profiles.md)
- [Numara serileri ayarlama](../fin-ops-core/fin-ops/organization-administration/number-sequence-overview.md?toc=/dynamics365/commerce/toc.json)
- [Varsayılan müşteriyi ve adres defterini ayarlama](default-customer.md)
<!--
- [Configure commerce parameters](commerce-parameters.md)
-->
## <a name="retail-channel-prerequisites"></a>Perakende kanalı önkoşulları
- [Bilgi kodları ve bilgi kodu grupları](info-codes-retail.md)
- [Perakende işlevselliği profili ayarlama](retail-functionality-profile.md)
- [Çalışan adres defteri ayarlama](new-address-book.md)
- [Ekran düzeni ayarlama](pos-screen-layouts.md)
- [Donanım istasyonu ayarlama](retail-hardware-station-configuration-installation.md)
## <a name="call-center-channel-prerequisites"></a>Çağrı Merkezi kanalı önkoşulları
- Çağrı merkezi parametreleri
- [Çağrı merkezi sipariş ve iade ödeme yöntemleri](work-with-payments.md)
- [Çağrı merkezi teslimat ve ücret modları](configure-call-center-delivery.md)
## <a name="online-channel-prerequisites"></a>Çevrimiçi kanal önkoşulları
- [Çevrimiçi işlevsellik profili oluşturma](online-functionality-profile.md)
## <a name="additional-resources"></a>Ek kaynaklar
[Kanallara genel bakış](channels-overview.md)
[Kuruluşlar ve kuruluş hiyerarşilerine genel bakış](../fin-ops-core/fin-ops/organization-administration/organizations-organizational-hierarchies.md?toc=/dynamics365/commerce/toc.json)
[Kuruluş hiyerarşilerini ayarlama](channels-org-hierarchies.md)
[Tüzel kişilik oluşturma](channels-legal-entities.md)
[Perakende kanalını ayarlama](channel-setup-retail.md)
[Çevrimiçi kanal ayarlama](channel-setup-online.md)
[!INCLUDE[footer-include](../includes/footer-banner.md)]
| 40.767442 | 183 | 0.79692 | tur_Latn | 0.972182 |
f70e11b20ff25179c3117e65f80957a33e07371f | 448 | md | Markdown | _posts/2014/2014-12-31-entrainement-decembre-2014.md | bdossantos/bds.run | 8a11943e861b00010ff1fab1224338b5091f7dba | [
"WTFPL"
] | 2 | 2018-02-01T22:30:41.000Z | 2018-02-02T10:15:07.000Z | _posts/2014/2014-12-31-entrainement-decembre-2014.md | bdossantos/runner.sh | c41e89f56bb633d9a9eacf0028feb246c285d774 | [
"WTFPL"
] | 6 | 2019-05-23T15:31:57.000Z | 2021-04-19T04:45:53.000Z | _posts/2014/2014-12-31-entrainement-decembre-2014.md | bdossantos/runner.sh | c41e89f56bb633d9a9eacf0028feb246c285d774 | [
"WTFPL"
] | null | null | null | ---
layout: post
title: Entrainement décembre 2014
description: Bilan sportif du mois de décembre 2014
category: entrainement
---
| Séances | 8 |
| Distance | 69.95 km |
| Durée | 6:23:53 h:m:s |
| Gain d'altitude | 659 m |
| Vitesse moyenne | 10.9 km/h |
Très peu de kilomètres, mais du D+ notamment grâce à la [Saintésprint][1].
[1]: {% post_url /2014/2014-12-13-saintesprint %}
| 26.352941 | 74 | 0.587054 | fra_Latn | 0.862286 |
f70e54b1ba200c762c90bd32f1befb3ef4268ab6 | 834 | md | Markdown | README.md | CristianSifuentes/FlaskWebApp | 09198403d3e633cb3ca94a70df22416f8e06c162 | [
"MIT"
] | 1 | 2020-06-27T00:35:07.000Z | 2020-06-27T00:35:07.000Z | README.md | CristianSifuentes/FlaskWebApp | 09198403d3e633cb3ca94a70df22416f8e06c162 | [
"MIT"
] | null | null | null | README.md | CristianSifuentes/FlaskWebApp | 09198403d3e633cb3ca94a70df22416f8e06c162 | [
"MIT"
] | null | null | null | # FlaskWebApp
This is a repository where we are developing a new web app using flask for python
# Commands used
* virtualenv -p python3 env
* env\Scripts\activate
* pip install Flask
* pip install Flask-Script
* python manage.py runserver
* pip install Flask-Bootstrap4
* pip install WTForms
* pip install flask-wtf
* pip install Flask-SQLAlchemy
* pip install mysqlclient (it´s was not used)
* pip install mysql-connector-python
* pip install email_validator
* pip install Werkzeug
* pip install flask-login
* pip install flask-mail
* pip install python-decouple
# Links importants
* https://pypi.org/project/mysql-connector-python/
* https://docs.sqlalchemy.org/en/13/dialects/mysql.html#module-sqlalchemy.dialects.mysql.mysqlconnector
* https://stackoverflow.com/questions/14164183/python-3-and-mysql-through-sqlalchemy | 30.888889 | 103 | 0.781775 | eng_Latn | 0.593221 |
f70e634d669c3e0d15fa1c9cb5ad336ab4e4c772 | 2,444 | md | Markdown | content/publication/scite_project/index.md | masonrhayes/my-website | fbfd5a88eb18888765dc2a2a5da9d0105bf682d4 | [
"MIT"
] | null | null | null | content/publication/scite_project/index.md | masonrhayes/my-website | fbfd5a88eb18888765dc2a2a5da9d0105bf682d4 | [
"MIT"
] | null | null | null | content/publication/scite_project/index.md | masonrhayes/my-website | fbfd5a88eb18888765dc2a2a5da9d0105bf682d4 | [
"MIT"
] | null | null | null | ---
# Documentation: https://wowchemy.com/docs/managing-content/
title: "Quantity over Quality? Analyzing Journal-Level Citations with scite"
authors: [Mason Hayes]
date: 2020-04-18T17:15:28+02:00
doi: ""
# Schedule page publish date (NOT publication's date).
publishDate: 2020-04-18T17:15:28+02:00
# Publication type.
# Legend: 0 = Uncategorized; 1 = Conference paper; 2 = Journal article;
# 3 = Preprint / Working Paper; 4 = Report; 5 = Book; 6 = Book section;
# 7 = Thesis; 8 = Patent
publication_types: ["0"]
# Publication name and optional abbreviated publication name.
publication: ""
publication_short: ""
abstract: "Gathering journal-level citation data from the scite API through the [sciteR](https://github.com/masonrhayes/sciteR) package, which
I created for this paper, I find that journal quality varies widely across disciplines. Mathematics, physics, and chemistry have a much higher scite journal index than do medicine and economics, for example. Using a sample of 589 journals across the disciplines of math, physics, chemistry, biology, business, economics, and medicine, I attempt to use a machine-learning model to classify each journal by subject according to its scite journal index (sji)."
# Summary. An optional shortened abstract.
summary: ""
tags: []
categories: []
featured: false
# Custom links (optional).
# Uncomment and edit lines below to show custom links.
# links:
# - name: Follow
# url: https://twitter.com
# icon_pack: fab
# icon: twitter
url_pdf: "content/publications/scite_project.pdf"
url_code:
url_dataset:
url_poster:
url_project:
url_slides:
url_source:
url_video:
# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
caption: ""
focal_point: ""
preview_only: false
# Associated Projects (optional).
# Associate this publication with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `internal-project` references `content/project/internal-project/index.md`.
# Otherwise, set `projects: []`.
projects: []
# Slides (optional).
# Associate this publication with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. `slides: "example"` references `content/slides/example/index.md`.
# Otherwise, set `slides: ""`.
slides: ""
---
| 34.422535 | 457 | 0.742635 | eng_Latn | 0.887429 |
f70f01e937f2cc2cf13f13a9eb04b6465448ccbe | 446 | md | Markdown | _posts/2017-06-15-vue app day03 完善结构.md | toBeTheLight/toBeTheLight.github.io | af46217b81e8db99f028e5eadce392c8b3cc7553 | [
"MIT"
] | 2 | 2017-03-08T03:00:19.000Z | 2017-04-17T11:35:34.000Z | _posts/2017-06-15-vue app day03 完善结构.md | toBeTheLight/toBeTheLight.github.io | af46217b81e8db99f028e5eadce392c8b3cc7553 | [
"MIT"
] | 7 | 2017-12-21T13:55:09.000Z | 2019-12-18T12:18:52.000Z | _posts/2017-06-15-vue app day03 完善结构.md | toBeTheLight/toBeTheLight.github.io | af46217b81e8db99f028e5eadce392c8b3cc7553 | [
"MIT"
] | null | null | null | ---
layout: post
title: "imooc-app day03 项目结构完善"
categories: vue
tags: 项目 vue
author: toBeTheLight
---
* content
{:toc}
使用Vue2.x Vuex webpack axios仿慕课网app。第三天。
完善项目结构,区分模块
##完善结构
完善目录结构
```
├── src/
│ ├── assets/ //引入静态资源
│ ├── components/ //公用组件
│ ├── router/ //路由管理
│ ├── service/ //api服务管理及缓存管理
│ ├── store/ //vue状态管理
│ └── pages/ //路由页面
```
可以开始写代码了o(* ̄▽ ̄*)ブ
| 15.37931 | 40 | 0.506726 | yue_Hant | 0.436688 |
f70f29288b2e536ae9e6f386ad8af5f62255687b | 781 | md | Markdown | _posts/2021-04-08-burnout.md | yduf/yduf.github.io | ffb34c791fc8962904c8d6f1c2245432745f6623 | [
"MIT"
] | null | null | null | _posts/2021-04-08-burnout.md | yduf/yduf.github.io | ffb34c791fc8962904c8d6f1c2245432745f6623 | [
"MIT"
] | 3 | 2019-09-29T13:42:27.000Z | 2021-10-06T20:20:31.000Z | _posts/2021-04-08-burnout.md | yduf/yduf.github.io | ffb34c791fc8962904c8d6f1c2245432745f6623 | [
"MIT"
] | null | null | null | ---
published: true
title: Burnout
tags: despair.com job
---
> Everyone talks about "burnout" as something that happens when you work too much.
>
> I see it far more in people who work a normal amount on things they know don't matter. - [twitter](https://twitter.com/KaseyKlimes/status/1375801723403505664) / [HN](https://news.ycombinator.com/item?id=26742065)
[Burnout and exhaustion are not the same](https://codecapsule.com/2021/07/28/burnout-is-real-and-its-no-picnic/), burnout comes from a combination of six main factors:
- Unsustainable workload
- Perceived lack of control
- Insufficient rewards for effort
- Lack of a supportive community
- Lack of fairness
- Mismatched values and skills
see also:
- [Post Burnout Ideas](https://news.ycombinator.com/item?id=27410951)
| 39.05 | 214 | 0.765685 | eng_Latn | 0.964461 |
f71072866101c6a6d3704ebe4d59d5cefecb0ef2 | 1,250 | md | Markdown | src/es/2020-01-cq/11/01.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/es/2020-01-cq/11/01.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/es/2020-01-cq/11/01.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: Transparencia total
date: 07/03/2020
---
**inTro**
### Franqueza
Vivimos en un mundo en el que falta transparencia. Ya sea en el gobierno, en la familia, en el matrimonio o en el entorno empresarial, parece estar de moda esconder información importante. «Todo el mundo esconde algo» es un concepto muy extendido hoy.
Lo cierto es que la información es poder, así que quien tiene la información necesaria la puede manipular a su antojo; y quien manipula información, puede terminar en una posición donde le es fácil abusar del poder.
Dios es el Ser más poderoso del universo, no solo en términos físicos, sino también porque él tiene toda la información; sin embargo, a pesar de eso, ha elegido, y continúa eligiendo, el camino de la transparencia y la franqueza en su trato con el ser humano. Es obvio que, para él. nuestra salvación eterna tiene mucho que ver con su transparencia y con su modo de compartir con nosotros toda la información que sea relevante.
**Escríbelo aquí**
- Copia de tu versión preferida de la Biblia Daniel 7: 9-14.
- Para abreviar puedes copiar únicamente los versículos 9 y 10.
- O si lo prefieres, puedes parafrasear el pasaje bíblico utilizando tus propias palabras, resumirlo o hacer un bosquejo.
| 54.347826 | 427 | 0.7792 | spa_Latn | 0.999591 |
f712b4d7664e777f299e0f4bc2ef084aaabe06e0 | 3,573 | md | Markdown | Round E/High Buildings/High Buildings_Question.md | imsushant12/GoogleKickStart2020_Solutions | 18bf0d6eecd67585bce708590265541080f14cc0 | [
"MIT"
] | 130 | 2020-10-01T15:51:39.000Z | 2022-03-25T21:57:01.000Z | Round E/High Buildings/High Buildings_Question.md | imsushant12/GoogleKickStart2020_Solutions | 18bf0d6eecd67585bce708590265541080f14cc0 | [
"MIT"
] | 76 | 2020-09-30T20:35:13.000Z | 2021-11-16T18:20:43.000Z | Round E/High Buildings/High Buildings_Question.md | imsushant12/GoogleKickStart2020_Solutions | 18bf0d6eecd67585bce708590265541080f14cc0 | [
"MIT"
] | 82 | 2020-09-28T20:40:11.000Z | 2022-02-19T10:42:21.000Z | ## Problem: High Buildings
In an unspecified country, Google has an office campus consisting of **N** office buildings in a line, numbered from 1 to **N** from left to right. When represented in meters, the height of each building is an integer between 1 to **N**, inclusive.
Andre and Sule are two Google employees working in this campus. On their lunch break, they wanted to see the skyline of the campus they are working in. Therefore, Andre went to the leftmost point of the campus (to the left of building 1), looking towards the rightmost point of the campus (to the right of building **N**). Similarly, Sule went to the rightmost point of the campus, looking towards the leftmost point of the campus.
To Andre, a building x is visible if and only if there is no building to the left of building x that is strictly higher than building x. Similarly, to Sule, a building x is visible if and only if there is no building to the right of building x that is strictly higher than building x.
Andre learned that there are **A** buildings that are visible to him, while Sule learned that there are **B** buildings that are visible to him. After they regrouped and exchanged information, they also learned that there are **C** buildings that are visible to both of them.
They are wondering about the height of each building. They are giving you the value of **N**, **A**, **B**, and **C** for your information. As their friend, you would like to construct a possible height for each building such that the information learned on the previous paragraph is correct, or indicate that there is no possible height construction that matches the information learned (thus at least one of them must have been mistaken).
**Input**
The first line of the input gives the number of test cases, **T**. **T** test cases follow. Each test case begins with a single line with four integers **N**, **A**, **B**, and **C**: the information given by Andre and Sule.
**Output**
For each test case, output one line containing `Case #x: y`, where `x` is the test case number (starting from 1) and `y` is `IMPOSSIBLE` if there is no possible height for each building according to the above information, or **N** space-separated integers otherwise. The i-th integer in y must be the height of the i-th building (in meters) between 1 to **N**.
**Limits**
- Time limit: `20 seconds per test set.`
- Memory limit: `1GB.`
```
1 ≤ T ≤ 100.
1 ≤ C ≤ N.
C ≤ A ≤ N.
C ≤ B ≤ N.
```
- Test set 1
`1 ≤ N ≤ 5.`
- Test set 2
`1 ≤ N ≤ 100.`
**Sample**
- Input:
```
3
4 1 3 1
4 4 4 3
5 3 3 2
```
- Output:
```
Case #1: 4 1 3 2
Case #2: IMPOSSIBLE
Case #3: 2 1 5 5 3
```
**Explaination**
* In Sample Case #1, the sample output sets the height of each building such that only the first building is visible to Andre, while the first, third, and fourth buildings are visible to Sule. Therefore, only the first building is visible to both Andre and Sule. Note that there exist other correct solutions, such as 4 3 1 2.
* In Sample Case #2, all N = 4 buildings are visible to Andre and Sule. Therefore, it is impossible to have C ≠ N in this case.
* In Sample Case #3, the sample output sets the height of each building such that the first, third, and fourth buildings are visible to Andre, while the third, fourth, and fifth buildings are visible to Sule. Therefore, the third and fourth buildings are visible to both Andre and Sule. Note that there exist other correct solutions.
**Go to the Editor:** <https://codingcompetitions.withgoogle.com/kickstart/round/000000000019ff47/00000000003bef73>
| 54.136364 | 440 | 0.735516 | eng_Latn | 0.999937 |
f712ed6ec3dbf0674e7b2640ad40ce0302706a47 | 1,024 | md | Markdown | README.md | myuserHQ/myuserpay-js | c7c8a5e93f693ec3dc5c9cc1ed2e0b4d0fbc1d43 | [
"MIT"
] | null | null | null | README.md | myuserHQ/myuserpay-js | c7c8a5e93f693ec3dc5c9cc1ed2e0b4d0fbc1d43 | [
"MIT"
] | null | null | null | README.md | myuserHQ/myuserpay-js | c7c8a5e93f693ec3dc5c9cc1ed2e0b4d0fbc1d43 | [
"MIT"
] | null | null | null | # myuserpay
Myuser.com API implementation for JavaScript based backends.
### Installation
You can install package to your project using _npm_ or _yarn_.
```bash
npm install myuserpay
# or
yarn add myuserpay
```
# Getting Started
You have to obtain your free account from [myuser.com](https://myuser.com) to get started. After creating your account you'll get your private and public keys which is required to use this library.
```js
// CommonJS
const myuser = require("myuserpay")("your-private-key");
// EcmaScript / TypeScript
import createMyuser from "myuserpay";
const myuser = createMyuser("your-private-key");
```
## Charge
Capture funds from customer's credit card.
```js
app.post("/charge", async (req, res) => {
const result = myuser.charge({
token: req.body.MyUserToken,
amount: 1000, // 10 USD
});
if (result.status) {
// Save payment to database ...
} else {
// Show error message
}
});
```
# Example
Check out `/example` directory for the example usage of this library.
| 20.078431 | 197 | 0.702148 | eng_Latn | 0.965532 |
f713bf951a983f4bd0abd0ce0eab735f24a46ae6 | 306 | md | Markdown | docs/Startup.md | aerojs/aero | 7eb79739cf34bd0f709b103e610fdb5b641308e1 | [
"MIT"
] | 198 | 2016-01-10T21:31:45.000Z | 2021-11-16T13:05:48.000Z | docs/Startup.md | blitzprog/fw | 7eb79739cf34bd0f709b103e610fdb5b641308e1 | [
"MIT"
] | 67 | 2015-11-08T12:45:39.000Z | 2018-03-14T16:33:42.000Z | docs/Startup.md | blitzprog/fw | 7eb79739cf34bd0f709b103e610fdb5b641308e1 | [
"MIT"
] | 25 | 2016-02-14T08:06:29.000Z | 2022-01-26T08:35:18.000Z | # Startup
All `.js` files inside the `startup` directory are loaded as node modules when your app runs. It is appropriate to add app configuration and middleware via separate modules inside `startup` instead of configuring everything inside one file only. Try to keep your `index.js` as clean as possible. | 102 | 295 | 0.794118 | eng_Latn | 0.998648 |
f7149e5d5a2578510e4f8da4e75237681b60bdc5 | 191 | md | Markdown | .changes/file-dialog-refactor.md | truexin1292/tauri | c16a2730ffcde62e58f4db8b8a2f999d157ecdc4 | [
"MIT"
] | 1 | 2021-02-24T08:05:13.000Z | 2021-02-24T08:05:13.000Z | .changes/file-dialog-refactor.md | truexin1292/tauri | c16a2730ffcde62e58f4db8b8a2f999d157ecdc4 | [
"MIT"
] | null | null | null | .changes/file-dialog-refactor.md | truexin1292/tauri | c16a2730ffcde62e58f4db8b8a2f999d157ecdc4 | [
"MIT"
] | null | null | null | ---
"tauri-api": minor
"api": minor
---
The file dialog API now uses [rfd](https://github.com/PolyMeilex/rfd). The filter option is now an array of `{ name: string, extensions: string[] }`.
| 27.285714 | 149 | 0.675393 | eng_Latn | 0.925168 |
f714d88e664023e711c91f382f4e484f065c0386 | 42 | md | Markdown | README.md | Developer3027/family-recipe | 3740b60873d72b1764ea629e039c98dd0312446e | [
"MIT"
] | null | null | null | README.md | Developer3027/family-recipe | 3740b60873d72b1764ea629e039c98dd0312446e | [
"MIT"
] | 5 | 2020-09-07T15:29:52.000Z | 2022-02-26T19:17:32.000Z | README.md | Developer3027/family-recipe | 3740b60873d72b1764ea629e039c98dd0312446e | [
"MIT"
] | null | null | null | # family-recipe
mom's cookbook is digital
| 14 | 25 | 0.785714 | eng_Latn | 0.990446 |
f716056d092cddfe9800cea96d6d834064220404 | 1,493 | md | Markdown | WindowsServerDocs/administration/windows-commands/ftp-type.md | hsebs/windowsserverdocs.ko-kr | 5df4d1fdc437c37e2c89d3c188db7354df97dd4b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/administration/windows-commands/ftp-type.md | hsebs/windowsserverdocs.ko-kr | 5df4d1fdc437c37e2c89d3c188db7354df97dd4b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/administration/windows-commands/ftp-type.md | hsebs/windowsserverdocs.ko-kr | 5df4d1fdc437c37e2c89d3c188db7354df97dd4b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ftp 유형
description: '* * * *에 대 한 참조 항목'
ms.prod: windows-server
ms.technology: manage-windows-commands
ms.topic: article
ms.assetid: 6e96dcd4-08f8-4e7b-90b7-1e1761fea4c7 vhorne
author: coreyp-at-msft
ms.author: coreyp
manager: dongill
ms.date: 10/16/2017
ms.openlocfilehash: 5531da30118914599ed0f85bfd10bd02ae89ffcf
ms.sourcegitcommit: ab64dc83fca28039416c26226815502d0193500c
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 05/01/2020
ms.locfileid: "82725082"
---
# <a name="ftp-type"></a>ftp: 형식
> 적용 대상: Windows Server (반기 채널), Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
설정 하거나 파일 전송 유형을 표시 합니다.
## <a name="syntax"></a>구문
```
type [<typeName>]
```
#### <a name="parameters"></a>매개 변수
| 매개 변수 | 설명 |
|--------------|-----------------------------------|
| [<typeName>] | 파일 전송 유형을 지정합니다. |
## <a name="remarks"></a>설명
- *typeName* 을 지정 하지 않으면 현재 형식이 표시 됩니다.
- **ftp** 는 ASCII 및 이진 이라는 두 가지 파일 전송 형식을 지원 합니다.
기본 파일 전송 유형이 ASCII입니다. **ascii** 명령은 텍스트 파일을 전송할 때 사용 됩니다. ASCII 모드에서는 네트워크 표준 문자 집합 사이에 문자 변환이 수행 됩니다. 줄 끝 문자 그대로 변환 되는 예를 들어 대상에서 운영 체제에 따라, 필요 합니다.
**이진** 명령은 실행 파일을 전송할 때 사용 해야 합니다. 이진 모드 파일은 1 바이트 단위로 이동 됩니다.
## <a name="examples"></a>예
ASCII 파일 전송 유형을 설정 합니다.
```
type ascii
```
이진 파일 형식을 전송 설정 합니다.
```
type binary
```
## <a name="additional-references"></a>추가 참조
- - [명령줄 구문 키](command-line-syntax-key.md)
| 29.86 | 155 | 0.622237 | kor_Hang | 0.999704 |
f7162ab2f4185ccbb1bfb699e9b411773ea9f2e5 | 12,743 | md | Markdown | Skype/SfbServer/plan-your-deployment/conferencing/conferencing-topology.md | isabella232/OfficeDocs-SkypeForBusiness.de-DE | 36002f4e7303a572fe24c40db59e1ae6c48207d0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Skype/SfbServer/plan-your-deployment/conferencing/conferencing-topology.md | isabella232/OfficeDocs-SkypeForBusiness.de-DE | 36002f4e7303a572fe24c40db59e1ae6c48207d0 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-10-09T18:09:59.000Z | 2021-10-09T18:09:59.000Z | Skype/SfbServer/plan-your-deployment/conferencing/conferencing-topology.md | isabella232/OfficeDocs-SkypeForBusiness.de-DE | 36002f4e7303a572fe24c40db59e1ae6c48207d0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Planen der Konferenztopologie für Skype for Business Server
ms.reviewer: ''
ms.author: v-cichur
author: cichur
manager: serdars
audience: ITPro
ms.topic: conceptual
ms.prod: skype-for-business-itpro
f1.keywords:
- NOCSH
ms.localizationpriority: medium
ms.assetid: 7392dfa7-791a-4723-88ff-0ef8a9ef11c8
description: 'Zusammenfassung: Lesen Sie dieses Thema, um mehr über die Planung Ihrer Konferenztopologie in Skype for Business Server zu erfahren.'
ms.openlocfilehash: 09d793a75ab72ef96d3ded85156c99a7590e087d
ms.sourcegitcommit: 15e90083c47eb5bcb03ca80c2e83feffe67646f2
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 08/30/2021
ms.locfileid: "58732634"
---
# <a name="plan-your-conferencing-topology-for-skype-for-business-server"></a>Planen der Konferenztopologie für Skype for Business Server
**Zusammenfassung:** Lesen Sie dieses Thema, um mehr über die Planung Ihrer Konferenztopologie in Skype for Business Server zu erfahren.
In diesem Thema werden die Topologiegrundlagen für Konferenzen in Skype for Business Server beschrieben:
- Unterstützte Topologien
- Überlegungen zu Einwahlkonferenzen
- Überlegungen zu Webkonferenzen
- Anforderungen für große Besprechungen
Weitere Informationen zu Hardware- und Softwareanforderungen finden Sie unter [Hardware- und Softwareanforderungen für Konferenzen in Skype for Business Server.](hardware-and-software-requirements.md)
## <a name="supported-topologies"></a>Unterstützte Topologien
In Skype for Business Server wird der Server, auf dem Konferenzdienste ausgeführt werden, immer mit den Front-End-Servern oder Standard Edition Servern verbunden. Wenn Sie Skype for Business Server bereitstellen, werden die Chatkonferenzfunktionen automatisch bereitgestellt. Mithilfe des Topologie-Generators können Sie angeben, ob Web-, Audio- und Videokonferenzen (A/V) und Einwahlkonferenzen bereitgestellt werden sollen. Sie können den Topologie-Generator auch verwenden, um einer vorhandenen Bereitstellung Konferenzen hinzuzufügen. Ausführliche Informationen zu Topologiegrundlagen und Kollokationsszenarien finden Sie unter [Topologiegrundlagen für Skype for Business Server](../../plan-your-deployment/topology-basics/topology-basics.md).
Sie können Konferenzen in den folgenden Topologien und Konfigurationen bereitstellen:
- Skype for Business Server Standard Edition
- Skype for Business Server Enterprise Edition
- Mit oder ohne Enterprise-VoIP
## <a name="dial-in-conferencing-considerations"></a>Überlegungen zu Einwahlkonferenzen
Wenn Sie Einwahlkonferenzen bereitstellen, müssen Sie Folgendes berücksichtigen:
- Einwahlkonferenzen erfordern, dass ein Vermittlungsserver Die Signalisierung (und Medien in einigen Konfigurationen) zwischen Skype for Business Server und dem PSTN-Gateway übersetzt, und ein PSTN-Gateway zum Übersetzen von Signalen und Medien zwischen dem Vermittlungsserver und dem PSTN-Gateway.
Bevor Sie Einwahlkonferenzen konfigurieren können, müssen Sie entweder Enterprise-VoIP oder einen Vermittlungsserver und mindestens eine der folgenden Optionen bereitstellen:
- PSTN-Gateway
- IP-Nebenstellenanlage
- Session Border Controller (SBC) (für einen Anbieter von Internettelefoniediensten, mit dem Sie eine Verbindung herstellen, indem Sie einen SIP-Trunk konfigurieren)
- Sie können den Anwendungsdienst, Konferenzzentralenanwendung und Konferenzankündigungsanwendung an einem zentralen Standort, aber nicht an einem Zweigstellenstandort bereitstellen.
- Sie müssen Einwahlkonferenzen in jedem Pool bereitstellen, in dem Sie Skype for Business Server Konferenzen bereitstellen. Sie müssen nicht in jedem Pool Zugriffsnummern zuweisen, aber Sie müssen das Feature für Einwahlkonferenzen in jedem Pool bereitstellen. Diese Anforderung unterstützt das Feature für aufgezeichnete Namen, wenn ein Benutzer eine Zugriffsnummer aus einem Pool aufruft, um an einer Skype for Business Server Konferenz in einem anderen Pool teilzunehmen.
Weitere Informationen finden Sie unter [Plan for dial-in conferencing in Skype for Business Server](dial-in-conferencing.md).
## <a name="web-conferencing-considerations"></a>Überlegungen zu Webkonferenzen
Webkonferenzen erfordern Folgendes:
- Zugriff auf den Dateispeicher, der zum Speichern von Webkonferenzinhalten verwendet wird.
- Integration in Office Web Apps Server/Office Online Server, was erforderlich ist, um PowerPoint Dateien während einer Konferenz freizugeben.
> [!NOTE]
> Die neueste Iteration von Office Web Apps-Server heißt Office Online Server, die von Skype for Business Server unterstützt wird. Weitere Informationen finden Sie in der [Office Online Server Dokumentation.](/officeonlineserver/office-online-server)
Skype for Business Server bietet die folgenden Möglichkeiten zum Konfigurieren Office Web Apps-Servers/Office Online Server. Je nach Ihren Anforderungen haben Sie folgende Möglichkeiten:
- **Installieren Sie sowohl Skype for Business Server als auch Office Web Apps-Server/Office Online Server lokal hinter der Firewall Ihrer Organisation und in derselben Netzwerkzone.** Bei dieser Topologie wird der externe Zugriff auf Office Web Apps-Server/Office Online Server über den Reverseproxyserver bereitgestellt. Idealerweise sollten Sie Office Web Apps-Server/Office Online Server in derselben Netzwerkzone wie Skype for Business Server installieren.
Externe Skype for Business Clients können eine Verbindung mit Skype for Business Server und mit Office Web Apps-Server/Office Online Server herstellen, indem sie einen Reverseproxyserver verwenden, bei dem es sich um einen Server handelt, der Anforderungen aus dem Internet entgegennimmt und an das interne Netzwerk weiterleitet. (Interne Clients müssen den Reverseproxyserver nicht verwenden, da sie eine direkte Verbindung mit Office Web Apps-Server/Office Online Server herstellen können.) Diese Topologie funktioniert am besten, wenn Sie einen dedizierten Office Web Apps-Server/Office Online Server-Farm verwenden möchten, der nur von Skype for Business Server verwendet wird.
- **Verwenden Sie einen extern bereitgestellten Office Web Apps-Server/Office Online Server.** In dieser Topologie wird Skype for Business Server lokal bereitgestellt und verwendet einen Office Web Apps-Server/Office Online Server, der außerhalb der Skype for Business Server Netzwerkzone bereitgestellt wird. Dies kann passieren, wenn Office Web Apps-Server/Office Online Server für mehrere Anwendungen im Unternehmen freigegeben ist und in einem Netzwerk bereitgestellt wird, das Skype for Business Server erfordert, um die externe Schnittstelle von Office Web Apps Server/Office Online Server zu verwenden und umgekehrt.
Sie müssen keinen Reverseproxyserver installieren. Stattdessen werden alle Anforderungen vom Office Web Apps-Server/Office Online Server an Skype for Business Server über den Edgeserver weitergeleitet. Sowohl die internen als auch die externen Skype for Business Clients stellen über die externe URL eine Verbindung mit Office Web Apps-Server/Office Online Server her.
Wenn der Office Web Apps-Server/Office Online Server außerhalb der internen Firewall bereitgestellt wird, wählen Sie die Option **aus, Office Web Apps-Server in einem externen Netzwerk** (d. h. Umkreis/Internet) im Topologie-Generator bereitgestellt wird.
Weitere Informationen finden Sie unter Konfigurieren der [Integration mit Office Web Apps Server in Skype for Business Server.](../../deploy/deploy-conferencing/office-web-app-server.md)
Unabhängig von der ausgewählten Topologie ist es wichtig, dass die richtigen Firewallports geöffnet werden. Sie müssen sicherstellen, dass DNS-Namen, IP-Adressen und Ports nicht durch Firewalls auf dem Office Web Apps-Server/Office Online Server, dem Lastenausgleich oder Skype for Business Server blockiert werden.
> [!NOTE]
> Eine weitere Option für den externen Zugriff auf Office Web Apps-Server/Office Online Server ist die Bereitstellung des Servers im Umkreisnetzwerk. Wenn Sie sich dafür entscheiden, denken Sie daran, dass Office Web Apps Server/Office Online Server Setup erfordert, dass der Servercomputer Mitglied Ihrer Active Directory-Domäne ist. Es wird empfohlen, Office Web Apps Server/Office Online Server nicht im Umkreisnetzwerk zu installieren, es sei denn, ihre Netzwerkrichtlinie erlaubt, dass Computer im Umkreisnetzwerk Active Directory-Domänenmitglieder sind. Stattdessen sollten Sie Office Web Apps-Server/Office Online Server im internen Netzwerk installieren und externen Benutzerzugriff über den Reverseproxyserver gewähren.
## <a name="topology-requirements-for-large-meetings"></a>Topologieanforderungen für große Besprechungen
Eine einzelne große Besprechung erfordert mindestens einen Front-End-Server und einen Back-End-Server. Um hohe Verfügbarkeit zu gewährleisten, empfehlen wir jedoch einen zwei Front-End-Serverpool mit gespiegelten Back-End-Servern, wie im folgenden Diagramm dargestellt:
**Topologie für große Besprechungen**
![Topologie für große Besprechungen.](../../media/06858900-a262-4a47-96d0-51abd6827064.png)
Der Benutzer, der die großen Besprechungen hostet, muss sein Benutzerkonto im Front-End-Pool verwaltet haben. Es wird jedoch nicht empfohlen, dass andere Benutzerkonten in diesem Pool gehostet werden. Er sollte nur für diese großen Besprechungen verwendet werden. In diesem Pool sollte ein spezielles Benutzerkonto erstellt werden, das nur zur Durchführung großer Besprechungen verwendet wird. Da die Einstellung für große Besprechungen für die Leistung optimiert ist, kann die Verwendung als normaler Benutzer Probleme haben, z. B. die Unfähigkeit, eine P2P-Sitzung zu einer Besprechung hochzustufen, wenn ein PSTN-Endpunkt beteiligt ist.
Bei der Verwaltung eines Pools mit genau zwei Front-End-Servern sind spezielle Überlegungen erforderlich. Weitere Informationen finden Sie unter [Topologiegrundlagen für Skype for Business Server 2015](../../plan-your-deployment/topology-basics/topology-basics.md) und [Referenztopologien für Skype for Business Server 2015.](../../plan-your-deployment/topology-basics/reference-topologies.md)
Wenn Sie optional eine Sicherung der Notfallwiederherstellung und ein Failover für den Pool bereitstellen möchten, der für große Besprechungen verwendet wird, können Sie ihn mit einem ähnlich eingerichteten dedizierten Pool in einem anderen Rechenzentrum koppeln. Ausführliche Informationen finden Sie unter [Plan for high availability and disaster recovery in Skype for Business Server](../../plan-your-deployment/high-availability-and-disaster-recovery/high-availability-and-disaster-recovery.md).
Zusätzliche Notizen zur Topologie:
- Eine Dateifreigabe ist zum Speichern von Besprechungsinhalten und, wenn der Archivierungsserver bereitgestellt und aktiviert ist, zum Speichern der Archivierungsdateien erforderlich. Die Dateifreigabe kann dem Pool zugeordnet werden oder dieselbe Dateifreigabe sein, die von einem anderen Pool an diesem Standort verwendet wurde, an dem der Pool bereitgestellt wurde. Ausführliche Informationen zum Konfigurieren der Dateifreigabe finden Sie unter [Erstellen einer Dateifreigabe in Skype for Business Server 2015.](../../deploy/install/create-a-file-share.md)
- Ein Office Web Apps-Server/Office Online Server ist erforderlich, um die PowerPoint Präsentationsfunktion in großen Besprechungen zu aktivieren. Der Office Web Apps-Server/Office Online Server kann für den großen Besprechungspool vorgesehen sein oder dieselbe Office Web Apps-Server/Office Online Server sein, die von anderen Pools am Standort verwendet werden, an dem der dedizierte Pool bereitgestellt wird. Weitere Informationen finden Sie unter Konfigurieren der [Integration mit Office Web Apps Server in Skype for Business Server.](../../deploy/deploy-conferencing/office-web-app-server.md)
- Für den Lastenausgleich der Front-End-Server ist ein Hardwarelastenausgleich für den HTTP-Datenverkehr (z. B. herunterladen von Besprechungsinhalten) erforderlich. DNS-Lastenausgleich wird für den SIP-Datenverkehr empfohlen. Ausführliche Informationen finden Sie unter [Lastenausgleichsanforderungen für Skype for Business.](../../plan-your-deployment/network-requirements/load-balancing.md)
- Wenn Sie den Monitoring Server für den dedizierten Pool für große Besprechungen verwenden möchten, empfehlen wir die Verwendung des Überwachungsservers und seiner Datenbank, die für alle Front-End-Serverpools in Ihrer Skype for Business Server Bereitstellung freigegeben sind. Weitere Informationen finden Sie unter [Plan for monitoring in Skype for Business Server](../../plan-your-deployment/monitoring.md).
| 103.601626 | 747 | 0.821392 | deu_Latn | 0.993175 |
f716547f08ce5ae77522791b076cdc273391b1a7 | 2,857 | md | Markdown | apiconcepts/projectautomation/translation_memory_settings.md | CrinaS/studio-api-docs | f5f977f92d398c1a60979af66e324a87411926b0 | [
"MIT"
] | 2 | 2021-10-06T17:14:46.000Z | 2022-02-03T14:40:30.000Z | apiconcepts/projectautomation/translation_memory_settings.md | CrinaS/studio-api-docs | f5f977f92d398c1a60979af66e324a87411926b0 | [
"MIT"
] | 5 | 2021-09-17T16:28:37.000Z | 2022-03-01T13:37:11.000Z | apiconcepts/projectautomation/translation_memory_settings.md | CrinaS/studio-api-docs | f5f977f92d398c1a60979af66e324a87411926b0 | [
"MIT"
] | 11 | 2021-08-10T08:08:31.000Z | 2022-03-01T08:58:23.000Z | Translation Memory Settings
==
Through the API you can also fine-tune the translation memory settings for the entire project or for each language pair. Example: The default minimum match value for TM searches is 70%. This means that by default <Var:ProductName> only offers fuzzy matches if they have a score of 70% or above. Depending on the project (or target language) requirements, however, you may use a lower or a higher minimum fuzzy value.
Implement a function called ```ConfigureTmSettings```, which takes a [FileBasedProject](../../api/projectautomation/Sdl.ProjectAutomation.FileBased.FileBasedProject.yml) object as parameter. Within this function apply the ```GetSettings``` method to the project to create a settings bundle object (```ISettingsBundle```) for the project. In the next step create an object derived from [TranslationMemorySettings](../../api/projectautomation/Sdl.ProjectAutomation.Settings.TranslationMemorySettings.yml) through which you can configure the various TM settings as shown in the code example below:
# [C#](#tab/tabid-1)
```CS
ISettingsBundle settings = project.GetSettings();
TranslationMemorySettings tmSettings = settings.GetSettingsGroup<TranslationMemorySettings>();
```
***
After that, you can apply the various properties available for fine-tuning the TM settings. The following chapters provide examples of the TM settings that can be configured for a project.
As mentioned above, you can also create translation memory settings specifically for a particular language direction. In that case you need to provide a [Language](../../api/core/Sdl.Core.Globalization.Language.yml) object as parameter when creating the target language-specific settings bundle as shown in the example below:
# [C#](#tab/tabid-2)
```CS
ISettingsBundle deSettings = project.GetSettings(new Language(CultureInfo.GetCultureInfo("de-DE")));
```
***
After applying the various TM settings you need to make sure to update the project settings by applying the [UpdateSettings](../../api/projectautomation/Sdl.ProjectAutomation.FileBased.FileBasedProject.yml#Sdl_ProjectAutomation_FileBased_FileBasedProject_UpdateSettings_Sdl_Core_Globalization_Language_Sdl_Core_Settings_ISettingsBundle_) method to the project as shown below:
# [C#](#tab/tabid-3)
```CS
project.UpdateSettings(settings);
```
***
Note that there is a large number of TM settings available. Some of these settings (e.g. filters) cannot be handled by the Project Automation API alone, but require the Translation Memory API.
See Also
--
[Translation Memory Search Settings](translation_memory_search_settings.md)
[Setting TM Penalties](setting_tm_penalties.md)
[Translation Memory Fields Update](translation_memory_field_update.md)
[Translation Memory Filter Settings](translation_memory_filter_settings.md)
[Project Configuration](project_configuration.md) | 64.931818 | 594 | 0.80574 | eng_Latn | 0.963611 |
f716f45447bccb13d12d9c92b021c9d6e1153e7a | 90 | md | Markdown | readme.md | 1337968347/my-component | 3c0703384a44f82a530117059dd26fa9989cf3ef | [
"MIT"
] | 3 | 2019-04-16T13:01:28.000Z | 2021-07-27T07:45:51.000Z | readme.md | 1337968347/my-component | 3c0703384a44f82a530117059dd26fa9989cf3ef | [
"MIT"
] | null | null | null | readme.md | 1337968347/my-component | 3c0703384a44f82a530117059dd26fa9989cf3ef | [
"MIT"
] | null | null | null | <a href="https://1337968347.github.io/cy-component/">在线DEMO</a>
webComponent 移动端组件 webGL
| 22.5 | 63 | 0.755556 | yue_Hant | 0.183745 |
f7170e71e40cd5fb840c190e002dd3fe3e1f2ea9 | 1,671 | md | Markdown | docs/setting-react-app/handle-404s.md | eksant/serverless-react-aws | 08e5789e8de5fa9957513e398d41752d2479f1f5 | [
"MIT"
] | 6 | 2018-06-23T08:19:50.000Z | 2022-02-07T06:08:16.000Z | docs/setting-react-app/handle-404s.md | eksant/serverless-react-aws | 08e5789e8de5fa9957513e398d41752d2479f1f5 | [
"MIT"
] | null | null | null | docs/setting-react-app/handle-404s.md | eksant/serverless-react-aws | 08e5789e8de5fa9957513e398d41752d2479f1f5 | [
"MIT"
] | 7 | 2018-06-28T07:06:34.000Z | 2020-03-04T05:37:59.000Z | ### **Handle 404s**
Now that we know how to handle the basic routes; let’s look at handling 404s with the React Router.
#### Create a Component
Let’s start by creating a component that will handle this for us.
Create a new component at src/containers/NotFound.js and add the following.
```
import React from "react";
import "./NotFound.css";
export default () =>
<div className="NotFound">
<h3>Sorry, page not found!</h3>
</div>;
```
All this component does is print out a simple message for us.
Let’s add a couple of styles for it in src/containers/NotFound.css.
```
.NotFound {
padding-top: 100px;
text-align: center;
}
```
#### Add a Catch All Route
Now we just need to add this component to our routes to handle our 404s.
Find the <Switch> block in src/Routes.js and add it as the last line in that section.
```
{ /* Finally, catch all unmatched routes */ }
<Route component={NotFound} />
```
This needs to always be the last line in the <Route> block. You can think of it as the route that handles requests in case all the other routes before it have failed.
And include the NotFound component in the header by adding the following:
```
import NotFound from "./containers/NotFound";
```
And that’s it! Now if you were to switch over to your browser and try clicking on the Login or Signup buttons in the Nav you should see the 404 message that we have.
![Router 404 page screenshot](https://d33wubrfki0l68.cloudfront.net/53e130e25da61cdebf79ba98d5d4eff2c07de7b9/46256/assets/router-404-page.png)
Next up, we are going to configure our app with the info of our backend resources.
[[Back]](https://github.com/eksant/serverless-react-aws)
| 29.839286 | 166 | 0.734889 | eng_Latn | 0.995321 |
f71844354a1a5c233d091a5079fa158d452109e0 | 455 | md | Markdown | CHANGELOG.md | splittingred/multitrap | 48c679a5e64941c5d691f9762be5368de56e9e4e | [
"MIT"
] | 6 | 2015-08-10T02:46:20.000Z | 2016-01-10T00:18:06.000Z | CHANGELOG.md | splittingred/multitrap | 48c679a5e64941c5d691f9762be5368de56e9e4e | [
"MIT"
] | 4 | 2015-07-20T12:03:55.000Z | 2019-11-14T17:43:38.000Z | CHANGELOG.md | splittingred/multitrap | 48c679a5e64941c5d691f9762be5368de56e9e4e | [
"MIT"
] | 4 | 2016-08-12T14:52:11.000Z | 2019-11-08T16:07:29.000Z | Multitrap changelog
===================
### v0.1.0 (August, 9, 2015)
* Added support for JRuby
* Added support for Rubinius
* Added support for CRuby 1.9
* Added support for symbols as signal keys ([#1](https://github.com/kyrylo/multitrap/issues/1))
* Added support for previously defined signals ([#2](https://github.com/kyrylo/multitrap/issues/2))
* Changed order of execution of handlers (FIFO now)
### v0.0.1 (September 7, 2014)
* Initial release
| 28.4375 | 99 | 0.696703 | eng_Latn | 0.912682 |
f7189947ec4266989b9ac5047d48782089559780 | 15,865 | md | Markdown | docs/ListGroupsUi.md | xaviertnc/onefile | b2a9226f87852f534565eea7730e0917d320ec64 | [
"MIT"
] | null | null | null | docs/ListGroupsUi.md | xaviertnc/onefile | b2a9226f87852f534565eea7730e0917d320ec64 | [
"MIT"
] | null | null | null | docs/ListGroupsUi.md | xaviertnc/onefile | b2a9226f87852f534565eea7730e0917d320ec64 | [
"MIT"
] | null | null | null | ANATOMY OF A GROUPED LIST UI!
=============================
Example List:
-------------
[
{ id:1 , type: 'flower', color: 'red' , variant: 'daisy' , description: 'F1' },
{ id:2 , type: 'flower', color: 'green' , variant: 'daisy' , description: 'F2' },
{ id:3 , type: 'flower', color: 'green' , variant: 'daisy' , description: 'F3' },
{ id:4 , type: 'flower', color: 'blue' , variant: 'violet' , description: 'F4' },
{ id:5 , type: 'flower', color: 'blue' , variant: 'violet' , description: 'F5' },
{ id:6 , type: 'dress' , color: 'black' , variant: 'evening', description: 'D1' },
{ id:7 , type: 'dress' , color: 'black' , variant: 'evening', description: 'D2' },
{ id:8 , type: 'dress' , color: 'gold' , variant: 'evening', description: 'D3' },
{ id:9 , type: 'dress' , color: 'red' , variant: 'day' , description: 'D4' },
{ id:10, type: 'dress' , color: 'white' , variant: 'day' , description: 'D5' },
{ id:11, type: 'car' , color: 'white' , variant: 'work' , description: 'C1' },
{ id:12, type: 'car' , color: 'white' , variant: 'work' , description: 'C2' },
{ id:13, type: 'car' , color: 'red' , variant: 'play' , description: 'C3' },
{ id:14, type: 'car' , color: 'silver', variant: 'work' , description: 'C4' },
{ id:15, type: 'car' , color: 'silver', variant: 'work' , description: 'C5' }
{ id:16, type: 'car' , color: '' , variant: 'work' , description: 'C6 '}
{ id:17, type: '' , color: '' , variant: 'work' , description: 'A1' }
{ id:18, type: 'car' , color: '' , variant: '' , description: 'C7 '}
{ id:19, type: '' , color: '' , variant: '' , description: 'B1 '}
]
Example Implementation Code:
----------------------------
ListUiManager::construct($listItems, $groupByProps, $aggregates = [])
$listManager = new ListUiManager($model->listItems, ['groupByPropName1', 'groupByPropName2']);
e.g. $listManager = new ListUiManager($model->listItems, ['type', 'color'], ['count' => 'id']);
$html = '';
foreach (listManager->uiSegments as $uiSegment))
{
switch ($uiSegment->type)
{
case 'headerSegment' : $html .= Ui:renderGroupHeaderHtml($uiSegment);
case 'openSegment' : $html .= Ui:renderGroupOpenHtml($uiSegment);
case 'closeSegment' : $html .= Ui:renderGroupCloseHtml($uiSegment);
case 'itemSegment' : $html .= Ui:renderItemHtml($uiSegment);
}
}
List Segment Type Examples:
---------------------------
headerSegment: <h1><i class="icon-plus"></i> { $item->type } - { $item->color }</h1>
openSegment: <ul id="{$group->id}" class="list-group">
itemSegment: <li id="{$item->id}" class="item">{ $item->description }</li>
closeSegment: </ul>
List Visual Layout:
-------------------
ROOT HEAD SEGMENT: group = { level: 0, id: root, groupBy: none }
ROOT OPEN SEGMENT: group = { level: 0, id: root, groupBy: none }
HEAD SEGMENT: group = { level: 1, id: root~flower, groupBy: type }
OPEN SEGMENT: group = { level: 1, id: root~flower, groupBy: type }
HEAD SEGMENT: group = { level: 2, id: root~flower~red, groupBy: color }
OPEN SEGMENT: group = { level: 2, id: root~flower~red, groupBy: color }
HEAD SEGMENT: group = { level: 2, id: root~flower~red~daisy, groupBy: variant }
OPEN SEGMENT: group = { level: 2, id: root~flower~red~daisy, groupBy: variant }
ITEM SEGMENT: item = { id: 1, type: flower, color: red, variant: daisy, description: F1 }
CLOSE SEGMENT: group = { id: root~flower~red~daisy }
CLOSE SEGMENT: group = { id: root~flower~red }
HEAD SEGMENT: group = { level: 2, id: root~flower~green, groupBy: color }
OPEN SEGMENT: group = { level: 2, id: root~flower~green, groupBy: color }
HEAD SEGMENT: group = { level: 2, id: root~flower~green~daisy, groupBy: variant }
OPEN SEGMENT: group = { level: 2, id: root~flower~green~daisy, groupBy: variant }
ITEM: { id: 2, type: flower, color: green, variant: daisy, description: F2 }
ITEM: { id: 3, type: flower, color: green, variant: daisy, description: F3 }
CLOSE SEGMENT: group = { id: root~flower~green~daisy }
CLOSE SEGMENT: group = { id: root~flower~green }
HEAD SEGMENT: group = { level: 2, id: root~flower~blue, groupBy: color }
OPEN SEGMENT: group = { level: 2, id: root~flower~blue, groupBy: color }
HEAD SEGMENT: group = { level: 2, id: root~flower~blue~violet, groupBy: variant }
OPEN SEGMENT: group = { level: 2, id: root~flower~blue~violet, groupBy: variant }
ITEM SEGMENT: { id: 4, type: flower, color: blue, variant: violet, description: F4 }
ITEM SEGMENT: { id: 5, type: flower, color: blue, variant: violet, description: F5 }
CLOSE SEGMENT: group = { id: root~flower~blue~violet }
CLOSE SEGMENT: group = { id: root~flower~blue }
CLOSE SEGMENT: group = { id: root~flower }
...
// SPECIAL CASES...
// Missing COLOR property!
HEAD SEGMENT: group = { level: 1, id: root~car, groupBy: type }
OPEN SEGMENT: group = { level: 1, id: root~car, groupBy: type }
HEAD SEGMENT: group = { level: 2, id: root~car~work, groupBy: variant }
OPEN SEGMENT: group = { level: 2, id: root~car~work, groupBy: variant }
ITEM SEGMENT: { id: 16, type: car, color: '', variant: work, description: C6 }
CLOSE SEGMENT: group = { id: root~car~work }
CLOSE SEGMENT: group = { id: root~car }
// Missing TYPE + COLOR properties!
HEAD SEGMENT: group = { level: 1, id: root~work, groupBy: variant }
OPEN SEGMENT: group = { level: 1, id: root~work, groupBy: variant }
ITEM SEGMENT: { id: 17, type: '', color: '', variant: work, description: A1 }
CLOSE SEGMENT: group = { id: root~work }
// Missing COLOR + VARIANT properties!
HEAD SEGMENT: group = { level: 1, id: root~car, groupBy: type }
OPEN SEGMENT: group = { level: 1, id: root~car, groupBy: type }
ITEM SEGMENT: { id: 18, type: 'car', color: '', variant: '', description: C7 }
CLOSE SEGMENT: group = { id: root~car }
// ZERO GROUPING properties! (i.e. In ROOT GROUP)
ITEM SEGMENT: { id: 19, type: '', color: '', variant: '', description: B1 }
ROOT CLOSE SEGMENT: group = { id: root }
ListGroupsUi
------------
construct ( $listItems, $groupByProps, $aggregates = [] )
// Array of ListUiSegement objects then we need to render
// in sequence to construct the list UI
$uiSegments = []
// The group that holds to TOP-LEVEL list items.
// = new ListUiGroup('root');
$rootGroup
// Provide indexes to items and groups on the TOP LEVEL.
$topLevelListIndex = 1
getListIndex ( $listGroup )
getItemGroup ( $item, array $groupByProps )
onSameBranch ( $group1, $group2 )
updateGroupAggregates ( $currentGroup, $item, $aggregateTypes, $aggregates )
closeAllLevelsFromTo ( $currentGroup, $targetGroup, $item = null )
openNewLevelsFromTo ( $currentGroup, $targetGroup, $item, $groupByProps )
generateUiSegments ( $listItems, $groupByProps, $aggregates )
ListUiGroup
-----------
construct ( $idParts = [], $level = 0, $groupByProp = null, $parentGroup = null )
$id // id = 'root~flower~red' or 'root~12~28' depending on values of grouping properties.
// id is a STRING of incremental indexes that make this group UNIQUE
// NOT a STRING that makes sense in terms of visual placement like
// $listIndex!
$level // Level of THIS GROUP. ROOT GROUP LEVEL = 0;
$idParts // idParts = ['root', 'flower', 'red'] or ['root', '12', '28']
// We can get the grouping property values from this array!
// Say the grouping property is: color_id, then we can get the color id's
$groupByProp // What ITEM PROPERTY we want to group by! (e.g color)
$parentGroup // Unless THIS GROUP is THE ROOT GROUP (level:0, id:root, parentGroup: NULL),
// this value will the parentGroup of THIS GROUP!
$itemCount = 0 // The number of items in this group
$listIndex = 0 // STRING representing the VISUAL numbering and index of this group
// relative to the ENTIRE LIST.
// NOTE: $listIndex is different from $group->id or $item->id!
// e.g. [12.4].1, [12.4].2, [12.4].3, ... listIndex = 12.4
$aggregates = []
ListUiSegment:
--------------
construct ( $type, $item, $group, $itemGroupIndex = null )
$type // Segment Type (i.e. headerSegment, openSegment, ...)
$item // Segment Data Item Obj
$item->_index // The natural (1,2,3,..) NON INDENTED index of THIS ITEM relative
// to the ENTIRE LIST. (_index is independent of grouping!)
// Use when you DON'T WANT:
// "1. Item 1"
// "1.1. Item 2"
// "1.2. Item 3"
// "2. Item 4"
// ...
// But rater want:
// "1. Item 1"
// "2. Item 2"
// "3. Item 3"
// "4. Item 4"
// ...
$group // Segment ListUiGroup Obj
$itemGroupIndex // The index of THIS ITEM inside its group.
// e.g. "1.2.[1]", "2.[3]", "4.1.[1]", ...
// itemGroupIndex = 1, 3, 1, ...
getItemListNumber () // Generates a STRING with THIS ITEM's visual list number.
// $group->listIndex + $itemGroupIndex
// e.g. "1.2.1", "2.3", "4.1.1", ...
ListUiSegment Instance Examples:
--------------------------------
[1] => OneFile\ListUiSegment Object
(
[type] => openSegment
[item] => stdClass Object
(
[id] => 1538
[seq] => 0
[pakket_id] => 51
[uitstallingtipe_id] => 9
[areatipe_id] => 6
[opsietipe_id] => 2
[opsiesubtipe_id] => 0
[uitstalopsie_id] => 132
[vormVeld] => OneFile\FormFieldModel Object
...
[_index] => 1
)
[group] => OneFile\ListUiGroup Object
(
[id] => root~2
[level] => 1
[idParts] => Array
(
[0] => root
[1] => 2
)
[groupByProp] => opsietipe_id
[parentGroup] => OneFile\ListUiGroup Object
(
[id] => root
[level] => 0
[idParts] => Array
(
[0] => root
)
[groupByProp] =>
[parentGroup] =>
[itemCount] => 7
[listIndex] => 0
[aggregates] => Array
(
)
)
[itemCount] => 2
[listIndex] => 1
[aggregates] => Array
(
[count] => 2
)
)
[itemGroupIndex] => 0
)
...
[39] => OneFile\ListUiSegment Object
(
[type] => itemSegment
[item] => stdClass Object
(
[id] => 1602
[seq] => 0
[pakket_id] => 51
[uitstallingtipe_id] => 9
[areatipe_id] => 6
[opsietipe_id] => 5
[opsiesubtipe_id] => 3
[uitstalopsie_id] => 145
...
[_index] => 18
)
[group] => OneFile\ListUiGroup Object
(
[id] => root~5~3
[level] => 2
[idParts] => Array
(
[0] => root
[1] => 5
[2] => 3
)
[groupByProp] => opsiesubtipe_id
[parentGroup] => OneFile\ListUiGroup Object
(
[id] => root~5
[level] => 1
[idParts] => Array
(
[0] => root
[1] => 5
)
[groupByProp] => opsietipe_id
[parentGroup] => OneFile\ListUiGroup Object
(
[id] => root
[level] => 0
[idParts] => Array
(
[0] => root
)
[groupByProp] =>
[parentGroup] =>
[itemCount] => 7
[listIndex] => 0
[aggregates] => Array
(
)
)
[itemCount] => 3
[listIndex] => 4
[aggregates] => Array
(
)
)
[itemCount] => 3
[listIndex] => 4.2
[aggregates] => Array
(
[count] => 3
)
)
[itemGroupIndex] => 3
)
ListUiGroup Instance Examples:
------------------------------
[group] => OneFile\ListUiGroup Object
(
[id] => root~4~5
[level] => 2
[idParts] => Array
(
[0] => root
[1] => 4
[2] => 5
)
[groupByProp] => opsiesubtipe_id
[parentGroup] => OneFile\ListUiGroup Object
(
[id] => root~4
[level] => 1
[idParts] => Array
(
[0] => root
[1] => 4
)
[groupByProp] => opsietipe_id
[parentGroup] => OneFile\ListUiGroup Object
(
[id] => root
[level] => 0
[idParts] => Array
(
[0] => root
)
[groupByProp] =>
[parentGroup] =>
[itemCount] => 3
[listIndex] => 0
[aggregates] => Array
(
)
)
[itemCount] => 3
[listIndex] => 3
[aggregates] => Array
(
[count] => 3
)
)
[itemCount] => 0
[listIndex] => 3.4
[aggregates] => Array
(
)
)
[group] => OneFile\ListUiGroup Object
(
[id] => root~4~6
[level] => 2
[idParts] => Array
(
[0] => root
[1] => 4
[2] => 6
)
[groupByProp] => opsiesubtipe_id
[parentGroup] => OneFile\ListUiGroup Object
(
[id] => root~4
[level] => 1
[idParts] => Array
(
[0] => root
[1] => 4
)
[groupByProp] => opsietipe_id
[parentGroup] => OneFile\ListUiGroup Object
(
[id] => root
[level] => 0
[idParts] => Array
(
[0] => root
)
[groupByProp] =>
[parentGroup] =>
[itemCount] => 3
[listIndex] => 0
[aggregates] => Array
(
)
)
[itemCount] => 4
[listIndex] => 3
[aggregates] => Array
(
[count] => 3
)
)
[itemCount] => 0
[listIndex] => 3.5
[aggregates] => Array
(
)
) | 30.986328 | 107 | 0.463158 | yue_Hant | 0.443603 |
f71a57c35a2e76952ff45a0ff69ffdadf3e09dbd | 4,710 | md | Markdown | README.md | kenjisato/lyx.local | 16afcea6893668dabe42c6e4d3a8b76bb7e887a9 | [
"MIT"
] | 1 | 2021-03-20T01:31:38.000Z | 2021-03-20T01:31:38.000Z | README.md | kenjisato/lyx.local | 16afcea6893668dabe42c6e4d3a8b76bb7e887a9 | [
"MIT"
] | null | null | null | README.md | kenjisato/lyx.local | 16afcea6893668dabe42c6e4d3a8b76bb7e887a9 | [
"MIT"
] | null | null | null | # lyx.local
LyX customization files to make my life easier. With this tool I (and possibly you too) can
- develop local styles in one place with joy,
- deploy and reconfigure easily with shell commands.
*Disclaimer*: Use this tool at your own risk; it may erase important files in your local [UserDir](https://wiki.lyx.org/LyX/UserDir). I would recommend against trying it when you have an approaching deadline.
## 1. Download and initialize
I assume that you have LyX and `bash`. `git` and `make` are recommended but optional. (You can download ZIP instead of `git clone`. `make init` is just and alias for `bash etc/init.sh`, `make deploy` for `bash etc/deploy.sh`, etc.)
If you are on Microsoft Windows, use WSL, MSYS2, MINGW, or Cygwin, though not fully tested on these platforms.
Open a Terminal window, then run
```bash
cd path/to/your/favorite/location
git clone https://github.com/kenjisato/lyx.local
cd lyx.local
make init
```
If you are asked to do so, check `etc/config` file. Since your system and/or application layout are not supported, automatic initialization failed. You must properly define variables `Python`, `LyX`, `LyXDir`, and `UserDir`.
MS Windows users don't need to install Python yourself, though it's very useful. LyX's Windows binary is shipped with it.
## 2. Modify
Take a look at `etc/deploy_targets` file. It defines three array variables.
- `directories`: included directories,
- `exclude`: excluded files, and
- `require_backup`: important files that need to make backup when a duplicate is found in UserDir.
All paths are relative to `LyX` and you can use wildcard. By default, anything in the following directories to LyX's `UserDir`. No exclusion, no backups.
```
LyX
├── bind
├── clipart
├── doc
├── examples
├── images
├── kbd
├── layouts
├── scripts
├── templates
└── ui
```
It is highly likely that you will never need some layout files I ship. You can simply remove those file from `LyX` directory. **NB**: When I make updates to the repository and when you `git pull` after, then those deleted files will come back again.
I put blank `etc/deploy_targets.local` file for your convenience. _I promise_ that I will keep this file as is so that any modification you make to this file won't be affected. This file is read after `etc/deploy_targets` and so if you re-define the array variables (`directories`, `exclude`, `require_backup`), the original definitions are overwritten.
**Example**: Maybe you don't want layout files for `bxjscls` and `pxrubrica` module (both for documents in Japanese), then you write something like below:
```
# etc/deploy_targets.local
exclude=('layouts/bxjs*' 'layouts/pxrubrica.module')
```
## 3. Deploy and reconfigure
Now you can deploy the customization files. With "deploy", I mean to copy files in LyX directory in the local repository to LyX's UserDir. This is done by
```
make deploy # Or, bash etc/deploy.sh
```
Then you need to Reconfigure LyX. The following command will execute `configure.py` from within UserDir.
```
make reconfigure # Or, bash etc/reconfigure.sh
```
To do both in one go, run
```
make deploy reconfigure
```
Restart LyX and you should see the changes take effect.
### Tips for Mac users
Clicking the red cross button is not enough. Type ⌘Q to properly shut down the app. Or, if don't want to leave Terminal, you may want to use the following command:
```
osascript -e 'quit app "LyX"'
```
Then the following command will re-open your demo document.
```
open demo/yourdemo.lyx
```
## 4. Write demos
When you are developing a module or layout file, you should be writing a LyX file for testing. The place for those demos is the `demo` direcory.
Place a LyX file there and you can compile it to PDF by running
```
make demo
```
This command compiles each LyX file in `demo` directory when the final PDF output is missing or older than the source file. By default the converter format is set to `pdf5`, i.e., the PDF file is created with LuaTeX. If you want to use pdflatex, then
```
make demo FORMAT=pdf2
```
For your information, LyX understands the following PDF formats:
- `pdf` (ps2pdf)
- `pdf2` (pdflatex)
- `pdf3` (dvipdfm)
- `pdf4` (xetex)
- `pdf5` (luatex)
You may desire to make pdflatex the default driver. Then, please edit your local `makefile` at the line defining `FORMAT` variable.
**What's next?**
Someday, you feel comfortable with the behavior of your modules and layouts. Then, move the demo LyX file to `LyX/doc` directory. By doing so, you can open the document from within LyX.
## Wish list
- Provide a way to delete a corresponding file in `UserDir` when a local development version is renamed.
- Write some more layout and module files.
| 35.149254 | 353 | 0.741189 | eng_Latn | 0.996934 |
f71b09f900d0e81e0003161518bb9e15070ea07a | 1,054 | md | Markdown | tech/python/setup/Python_Setup_PyCharm_Errors.md | YingVickyCao/YingVickyCao.github.io | a033ac10d54cb9b5e5338c451715d52549272ae1 | [
"Apache-2.0"
] | null | null | null | tech/python/setup/Python_Setup_PyCharm_Errors.md | YingVickyCao/YingVickyCao.github.io | a033ac10d54cb9b5e5338c451715d52549272ae1 | [
"Apache-2.0"
] | null | null | null | tech/python/setup/Python_Setup_PyCharm_Errors.md | YingVickyCao/YingVickyCao.github.io | a033ac10d54cb9b5e5338c451715d52549272ae1 | [
"Apache-2.0"
] | null | null | null | # PyCharm FAQ
# 1 PyCharm[2019.2] 导入本地 py 文件时,模块下方出现红色波浪线时如何解决
出现红色波浪线的原因是因为本地路径并未被标记“源目录”
Step 1: 标识为 Sources
![PyCharm_import_tip_red_1](/img/PyCharm_import_tip_red_1.png)
Step 2 : “Add source roots to PYTHONPAT”
![PyCharm_import_tip_red_2](/img/PyCharm_import_tip_red_2.png)
Step 3: “Mark Directory as”为“Sources Root”
![PyCharm_import_tip_red_3](/img/PyCharm_import_tip_red_3.png)
# 2 PyCharm[2019.2] keyword argiment 变 orange color
![PyCharm_keyword_argument_color_is_orange](/img/PyCharm_keyword_argument_color_is_orange.jpg)
Pycharm 的主题设置对参数的显示颜色为这个.
可以改 : 在 Preferences -> Color Scheme -> Python -> Keyword argument
# 3 pycharm[2019.2] 不代码提示 pygame
Step 1 : 检查是否代码本身有问题。
Step 2 : 检查代码提示是否成功开启。
setting → Inspections → Spelling 开启
setting → Inspections → Python 不打开/打开
Step 3 : 检查 IDE 省电模式是否关闭状态
file → power save mode 取消掉
Step 4 : 对第三方库支持不好,不能显示/部分显示. etc, pygame.(Resolved)
pip 删除 pygame,然后离线安装 pygame。Restart PyCharm by `Invalidate and Restart`
http://www.pygame.org/download.shtml
| 26.35 | 95 | 0.748577 | yue_Hant | 0.437278 |
f71b36cdcff206416ffe7403facf6d83f099a5d5 | 55 | md | Markdown | README.md | hayorov/tekton-ws | ae48bd2086eb6b2b00dbe7739ebccc2ff843fef1 | [
"Apache-2.0"
] | null | null | null | README.md | hayorov/tekton-ws | ae48bd2086eb6b2b00dbe7739ebccc2ff843fef1 | [
"Apache-2.0"
] | null | null | null | README.md | hayorov/tekton-ws | ae48bd2086eb6b2b00dbe7739ebccc2ff843fef1 | [
"Apache-2.0"
] | null | null | null | # tekton-ws
Tekton CD workshop at FOSSASIA Summit 2020
| 18.333333 | 42 | 0.8 | ind_Latn | 0.214445 |
f71e45d45fe898537d483cca435784314a8e2ee4 | 2,009 | md | Markdown | README.md | leeeastwood/Haiway | 153b6861f864966c454f97febe03ae191a9a8657 | [
"MIT"
] | 162 | 2019-01-04T14:23:52.000Z | 2021-12-26T05:51:34.000Z | README.md | leeeastwood/Haiway | 153b6861f864966c454f97febe03ae191a9a8657 | [
"MIT"
] | 6 | 2019-01-04T14:32:15.000Z | 2020-08-07T06:47:34.000Z | README.md | leeeastwood/Haiway | 153b6861f864966c454f97febe03ae191a9a8657 | [
"MIT"
] | 130 | 2019-01-04T14:24:33.000Z | 2021-06-25T16:48:56.000Z | # Welcome to the Haiway wiki!
Haiway:an RTOS for Edge Computing
Microkernel RTOS with Virtualization and SMP support for ARMv8-A, ARMv7, RISC-V
## What is Haiway
Haiway is a real-time priority-based microkernel RTOS with virtualization support for ARMv8-A that provides the trusted reliability and performance for edge computing while also allowing multiple operating systems to safely co-exist on the same System on Chip (SoC).
Haiway defines a hypervisor reference stack and an architecture for running multiple software subsystems, managed securely, on a consolidated system by means of a virtual machine manager. Haiway can be used as a Type 1 reference hypervisor stack, running directly on the bare-metal hardware, and is suitable for a variety of AIoT and edge device solutions. Haiway addresses the gap that currently exists between embedded operating system and heterogeneous hardware devices. The hypervisor architecture partitions the system into different functional domains, with carefully selected guest OS sharing optimizations for AIoT and embedded devices.
Haiway is also designed as a real-time priority-based microkernel RTOS that support SMP, currently support ARMv8-A, But can be easily ported to other platforms and architectures like Cortex-M MCU.
## Supported hardware:
- [x] ESP32
- [x] STM32
- [x] Raspberry 3B/4
- [x] Huawei Hi 3516/3519/3559
- [x] Intel NUC
Haiway can be easily ported to other arm and risc-v based platform.
## Documentation
We will have various README files in the Documents subdirectory. Please refer Documents for a list of what is contained in each file or sub-directory.
### MicroServices Documents:
1.1 [Basic services](http://39.105.15.119:8072/dist/microservice.html)
1.2 [Alogrithms services](http://39.105.15.119:8072/dist/alogrithms.html)
1.3 [Industry services](http://39.105.15.119:8072/dist/industry.html)
Blockly Components Documents:
2. [Low-Code Development Platform Fucnction List](http://39.105.15.119:8072/dist/block.html)
| 52.868421 | 644 | 0.79343 | eng_Latn | 0.993994 |
f71e4e5801798453b0d83a7c0e940847d0f3871c | 698 | md | Markdown | README.md | felipecpc/YOLO-Tools | 14e887f294ee6213d0b0b1320d54e4e4422a7390 | [
"MIT"
] | null | null | null | README.md | felipecpc/YOLO-Tools | 14e887f294ee6213d0b0b1320d54e4e4422a7390 | [
"MIT"
] | null | null | null | README.md | felipecpc/YOLO-Tools | 14e887f294ee6213d0b0b1320d54e4e4422a7390 | [
"MIT"
] | null | null | null | # YOLO-Tools
Useful scripts to prepare a testing dataset.
## Convert to jpg
This script will convert the images in a given folder to jpg and also resize it.
```
imageSize = (800,600)
imagePath = "Images/001/*.*"
datasetOutput = "dataset"
```
``` python3 convert2jpg.py ```
## Annotation
### Annotation Tool
Tool to mark the elements in the image
``` python3 annotation.py ```
### Convert to Darknet
Take the output from annotation tool and convert to the format expected by darknet
``` python3 convert2darknet.py```
## Split
Split the dataset between training and test
``` python3 split.py```
## Darkflow
Convert the darknet annotation file to
``` python3 darknet2darkflow.py``` | 18.368421 | 82 | 0.716332 | eng_Latn | 0.949813 |
f71eaaa54498874c2d84f0aa23d59d4d86b01a00 | 307 | md | Markdown | content/lua/eq/cross_zone_remove_spell_by_guild_id.md | xackery/eqquestapi | e3bb4d58651c7c2bb1ced94deb59115946eed3c5 | [
"MIT"
] | null | null | null | content/lua/eq/cross_zone_remove_spell_by_guild_id.md | xackery/eqquestapi | e3bb4d58651c7c2bb1ced94deb59115946eed3c5 | [
"MIT"
] | 1 | 2020-09-08T17:21:08.000Z | 2020-09-08T17:21:08.000Z | content/lua/eq/cross_zone_remove_spell_by_guild_id.md | xackery/eqquestapi | e3bb4d58651c7c2bb1ced94deb59115946eed3c5 | [
"MIT"
] | 1 | 2020-08-29T00:49:26.000Z | 2020-08-29T00:49:26.000Z | ---
title: cross_zone_remove_spell_by_guild_id
searchTitle: Lua EQ cross_zone_remove_spell_by_guild_id
weight: 1
hidden: true
menuTitle: cross_zone_remove_spell_by_guild_id
---
## cross_zone_remove_spell_by_guild_id
```lua
eq.cross_zone_remove_spell_by_guild_id(number guild_id, number spell_id) -- void
``` | 27.909091 | 80 | 0.843648 | eng_Latn | 0.545568 |
f71fa90ddb3d9f10c71eeda992c306f1fc8d472a | 242 | md | Markdown | org/docs/measurements/bustspan/nl.md | lusty/markdown | 4fbfb8964cde4716739f8653a69839aac590940d | [
"MIT"
] | 7 | 2019-06-26T14:03:06.000Z | 2021-07-11T03:48:31.000Z | org/docs/measurements/bustspan/nl.md | lusty/markdown | 4fbfb8964cde4716739f8653a69839aac590940d | [
"MIT"
] | 76 | 2019-06-12T07:03:47.000Z | 2021-08-15T22:55:57.000Z | org/docs/measurements/bustspan/nl.md | lusty/markdown | 4fbfb8964cde4716739f8653a69839aac590940d | [
"MIT"
] | 51 | 2019-07-02T07:39:40.000Z | 2021-11-18T17:11:20.000Z | ---
title: Bustewijdte
---
The **bust span** is the horizontal distance between the two apex points of your bust.
Om de bustewijdte te bepalen meet je horizontaal en in een rechte lijn de afstand van de ene top van de borst naar de andere.
| 30.25 | 125 | 0.756198 | nld_Latn | 0.999121 |